00:00:00.001 Started by upstream project "autotest-per-patch" build number 126214 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.102 The recommended git tool is: git 00:00:00.102 using credential 00000000-0000-0000-0000-000000000002 00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.155 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.196 Using shallow fetch with depth 1 00:00:00.196 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.196 > git --version # timeout=10 00:00:00.234 > git --version # 'git version 2.39.2' 00:00:00.234 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.588 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.600 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.612 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.612 > git config core.sparsecheckout # timeout=10 00:00:05.624 > git read-tree -mu HEAD # timeout=10 00:00:05.641 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.661 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.661 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.766 [Pipeline] Start of Pipeline 00:00:05.782 [Pipeline] library 00:00:05.785 Loading library shm_lib@master 00:00:07.283 Library shm_lib@master is cached. Copying from home. 00:00:07.314 [Pipeline] node 00:00:07.386 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:07.389 [Pipeline] { 00:00:07.404 [Pipeline] catchError 00:00:07.406 [Pipeline] { 00:00:07.425 [Pipeline] wrap 00:00:07.435 [Pipeline] { 00:00:07.445 [Pipeline] stage 00:00:07.447 [Pipeline] { (Prologue) 00:00:07.659 [Pipeline] sh 00:00:07.940 + logger -p user.info -t JENKINS-CI 00:00:07.956 [Pipeline] echo 00:00:07.957 Node: WFP21 00:00:07.963 [Pipeline] sh 00:00:08.258 [Pipeline] setCustomBuildProperty 00:00:08.270 [Pipeline] echo 00:00:08.271 Cleanup processes 00:00:08.274 [Pipeline] sh 00:00:08.552 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.552 1344593 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.563 [Pipeline] sh 00:00:08.844 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.844 ++ grep -v 'sudo pgrep' 00:00:08.844 ++ awk '{print $1}' 00:00:08.844 + sudo kill -9 00:00:08.844 + true 00:00:08.865 [Pipeline] cleanWs 00:00:08.874 [WS-CLEANUP] Deleting project workspace... 00:00:08.874 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.881 [WS-CLEANUP] done 00:00:08.886 [Pipeline] setCustomBuildProperty 00:00:08.903 [Pipeline] sh 00:00:09.187 + sudo git config --global --replace-all safe.directory '*' 00:00:09.245 [Pipeline] httpRequest 00:00:09.275 [Pipeline] echo 00:00:09.276 Sorcerer 10.211.164.101 is alive 00:00:09.281 [Pipeline] httpRequest 00:00:09.285 HttpMethod: GET 00:00:09.285 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.285 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.292 Response Code: HTTP/1.1 200 OK 00:00:09.293 Success: Status code 200 is in the accepted range: 200,404 00:00:09.293 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:14.431 [Pipeline] sh 00:00:14.716 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:14.733 [Pipeline] httpRequest 00:00:14.756 [Pipeline] echo 00:00:14.758 Sorcerer 10.211.164.101 is alive 00:00:14.767 [Pipeline] httpRequest 00:00:14.771 HttpMethod: GET 00:00:14.771 URL: http://10.211.164.101/packages/spdk_2da93d0d7ba4a6f1ce4127072b358f5ec42c6689.tar.gz 00:00:14.772 Sending request to url: http://10.211.164.101/packages/spdk_2da93d0d7ba4a6f1ce4127072b358f5ec42c6689.tar.gz 00:00:14.786 Response Code: HTTP/1.1 200 OK 00:00:14.787 Success: Status code 200 is in the accepted range: 200,404 00:00:14.788 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_2da93d0d7ba4a6f1ce4127072b358f5ec42c6689.tar.gz 00:01:14.926 [Pipeline] sh 00:01:15.242 + tar --no-same-owner -xf spdk_2da93d0d7ba4a6f1ce4127072b358f5ec42c6689.tar.gz 00:01:17.788 [Pipeline] sh 00:01:18.070 + git -C spdk log --oneline -n5 00:01:18.070 2da93d0d7 test/common: Include test/nvme in the reap_spdk_processes() lookup 00:01:18.070 719d03c6a sock/uring: only register net impl if supported 00:01:18.070 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:18.070 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:18.070 6c7c1f57e accel: add sequence outstanding stat 00:01:18.086 [Pipeline] } 00:01:18.106 [Pipeline] // stage 00:01:18.115 [Pipeline] stage 00:01:18.118 [Pipeline] { (Prepare) 00:01:18.135 [Pipeline] writeFile 00:01:18.148 [Pipeline] sh 00:01:18.428 + logger -p user.info -t JENKINS-CI 00:01:18.441 [Pipeline] sh 00:01:18.721 + logger -p user.info -t JENKINS-CI 00:01:18.732 [Pipeline] sh 00:01:19.015 + cat autorun-spdk.conf 00:01:19.015 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.015 SPDK_TEST_NVMF=1 00:01:19.015 SPDK_TEST_NVME_CLI=1 00:01:19.015 SPDK_TEST_NVMF_NICS=mlx5 00:01:19.015 SPDK_RUN_UBSAN=1 00:01:19.015 NET_TYPE=phy 00:01:19.021 RUN_NIGHTLY=0 00:01:19.028 [Pipeline] readFile 00:01:19.056 [Pipeline] withEnv 00:01:19.058 [Pipeline] { 00:01:19.072 [Pipeline] sh 00:01:19.357 + set -ex 00:01:19.357 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:19.357 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:19.357 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.357 ++ SPDK_TEST_NVMF=1 00:01:19.357 ++ SPDK_TEST_NVME_CLI=1 00:01:19.357 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:19.357 ++ SPDK_RUN_UBSAN=1 00:01:19.357 ++ NET_TYPE=phy 00:01:19.357 ++ RUN_NIGHTLY=0 00:01:19.357 + case $SPDK_TEST_NVMF_NICS in 00:01:19.357 + DRIVERS=mlx5_ib 00:01:19.357 + [[ -n mlx5_ib ]] 00:01:19.357 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.357 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:25.926 
rmmod: ERROR: Module irdma is not currently loaded 00:01:25.926 rmmod: ERROR: Module i40iw is not currently loaded 00:01:25.926 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:25.926 + true 00:01:25.926 + for D in $DRIVERS 00:01:25.926 + sudo modprobe mlx5_ib 00:01:25.926 + exit 0 00:01:25.935 [Pipeline] } 00:01:25.951 [Pipeline] // withEnv 00:01:25.956 [Pipeline] } 00:01:25.970 [Pipeline] // stage 00:01:25.978 [Pipeline] catchError 00:01:25.979 [Pipeline] { 00:01:25.992 [Pipeline] timeout 00:01:25.993 Timeout set to expire in 1 hr 0 min 00:01:25.994 [Pipeline] { 00:01:26.010 [Pipeline] stage 00:01:26.012 [Pipeline] { (Tests) 00:01:26.026 [Pipeline] sh 00:01:26.309 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:26.309 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:26.309 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:26.309 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:26.309 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:26.309 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:26.309 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:26.309 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:26.309 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:26.309 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:26.309 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:26.309 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:26.309 + source /etc/os-release 00:01:26.309 ++ NAME='Fedora Linux' 00:01:26.309 ++ VERSION='38 (Cloud Edition)' 00:01:26.309 ++ ID=fedora 00:01:26.309 ++ VERSION_ID=38 00:01:26.309 ++ VERSION_CODENAME= 00:01:26.309 ++ PLATFORM_ID=platform:f38 00:01:26.309 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:26.309 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.309 ++ LOGO=fedora-logo-icon 00:01:26.309 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:26.309 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.309 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:26.309 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.309 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.309 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.309 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:26.309 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.309 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:26.309 ++ SUPPORT_END=2024-05-14 00:01:26.309 ++ VARIANT='Cloud Edition' 00:01:26.309 ++ VARIANT_ID=cloud 00:01:26.309 + uname -a 00:01:26.309 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:26.309 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:29.599 Hugepages 00:01:29.599 node hugesize free / total 00:01:29.599 node0 1048576kB 0 / 0 00:01:29.599 node0 2048kB 0 / 0 00:01:29.599 node1 1048576kB 0 / 0 00:01:29.599 node1 2048kB 0 / 0 00:01:29.599 00:01:29.599 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:29.599 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:29.599 I/OAT 0000:80:04.0 
8086 2021 1 ioatdma - - 00:01:29.599 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:29.599 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:29.599 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:29.599 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:29.599 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:29.600 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:29.600 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:29.600 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:29.600 + rm -f /tmp/spdk-ld-path 00:01:29.600 + source autorun-spdk.conf 00:01:29.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.600 ++ SPDK_TEST_NVMF=1 00:01:29.600 ++ SPDK_TEST_NVME_CLI=1 00:01:29.600 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:29.600 ++ SPDK_RUN_UBSAN=1 00:01:29.600 ++ NET_TYPE=phy 00:01:29.600 ++ RUN_NIGHTLY=0 00:01:29.600 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:29.600 + [[ -n '' ]] 00:01:29.600 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:29.600 + for M in /var/spdk/build-*-manifest.txt 00:01:29.600 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:29.600 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:29.600 + for M in /var/spdk/build-*-manifest.txt 00:01:29.600 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:29.600 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:29.600 ++ uname 00:01:29.600 + [[ Linux == \L\i\n\u\x ]] 00:01:29.600 + sudo dmesg -T 00:01:29.600 + sudo dmesg --clear 00:01:29.600 + dmesg_pid=1345673 00:01:29.600 + [[ Fedora Linux == FreeBSD ]] 00:01:29.600 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.600 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.600 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.600 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:29.600 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:29.600 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.600 + export FIO_BIN=/usr/src/fio-static/fio 00:01:29.600 + FIO_BIN=/usr/src/fio-static/fio 00:01:29.600 + sudo dmesg -Tw 00:01:29.600 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.600 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:29.600 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.600 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.600 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.600 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.600 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.600 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.600 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:29.600 Test configuration: 00:01:29.600 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.600 SPDK_TEST_NVMF=1 00:01:29.600 SPDK_TEST_NVME_CLI=1 00:01:29.600 SPDK_TEST_NVMF_NICS=mlx5 00:01:29.600 SPDK_RUN_UBSAN=1 00:01:29.600 NET_TYPE=phy 00:01:29.600 RUN_NIGHTLY=0 17:53:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:29.600 17:53:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.600 17:53:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.600 17:53:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.600 17:53:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.600 17:53:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.600 17:53:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.600 17:53:29 -- paths/export.sh@5 -- $ export PATH 00:01:29.600 17:53:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.600 17:53:29 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:29.600 17:53:29 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:29.600 17:53:29 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721058809.XXXXXX 00:01:29.600 17:53:29 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721058809.gAmx6w 00:01:29.600 17:53:29 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:29.600 17:53:29 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:29.600 17:53:29 -- common/autobuild_common.sh@453 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:29.600 17:53:29 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:29.600 17:53:29 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.600 17:53:29 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:29.600 17:53:29 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:29.600 17:53:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.600 17:53:29 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:29.600 17:53:29 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:29.600 17:53:29 -- pm/common@17 -- $ local monitor 00:01:29.600 17:53:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.600 17:53:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.600 17:53:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.600 17:53:29 -- pm/common@21 -- $ date +%s 00:01:29.600 17:53:29 -- pm/common@21 -- $ date +%s 00:01:29.600 17:53:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.600 17:53:29 -- pm/common@25 -- $ sleep 1 00:01:29.600 17:53:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721058809 00:01:29.600 17:53:29 -- pm/common@21 -- $ date +%s 00:01:29.600 17:53:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721058809 00:01:29.600 17:53:29 -- pm/common@21 -- $ date +%s 00:01:29.600 17:53:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721058809 00:01:29.600 17:53:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721058809 00:01:29.600 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721058809_collect-vmstat.pm.log 00:01:29.600 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721058809_collect-cpu-load.pm.log 00:01:29.600 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721058809_collect-cpu-temp.pm.log 00:01:29.600 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721058809_collect-bmc-pm.bmc.pm.log 00:01:30.535 17:53:30 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:30.535 17:53:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.535 17:53:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.535 17:53:30 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:30.535 17:53:30 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.535 Mon Jul 15 03:53:30 PM UTC 2024 00:01:30.535 17:53:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.535 v24.09-pre-203-g2da93d0d7 00:01:30.535 17:53:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:30.535 17:53:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.535 17:53:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.535 17:53:30 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:30.535 17:53:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:30.535 17:53:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.535 ************************************ 00:01:30.535 START TEST ubsan 00:01:30.535 ************************************ 00:01:30.535 17:53:30 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:30.535 using ubsan 00:01:30.535 00:01:30.535 real 0m0.001s 00:01:30.535 user 0m0.000s 00:01:30.535 sys 0m0.000s 00:01:30.535 17:53:30 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:30.535 17:53:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.535 ************************************ 00:01:30.535 END TEST ubsan 00:01:30.535 ************************************ 00:01:30.794 17:53:30 -- common/autotest_common.sh@1142 -- $ return 0 00:01:30.794 17:53:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.794 17:53:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.794 17:53:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.794 17:53:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.794 17:53:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.794 17:53:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.794 17:53:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.794 17:53:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.794 17:53:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:30.794 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:30.794 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:31.053 Using 'verbs' RDMA provider 00:01:46.914 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:59.134 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:59.134 Creating mk/config.mk...done. 00:01:59.134 Creating mk/cc.flags.mk...done. 00:01:59.134 Type 'make' to build. 00:01:59.134 17:53:58 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:59.134 17:53:58 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:59.134 17:53:58 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:59.134 17:53:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.134 ************************************ 00:01:59.134 START TEST make 00:01:59.134 ************************************ 00:01:59.134 17:53:58 make -- common/autotest_common.sh@1123 -- $ make -j112 00:01:59.134 make[1]: Nothing to be done for 'all'. 
00:02:07.256 The Meson build system 00:02:07.256 Version: 1.3.1 00:02:07.256 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:07.256 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:07.256 Build type: native build 00:02:07.256 Program cat found: YES (/usr/bin/cat) 00:02:07.256 Project name: DPDK 00:02:07.256 Project version: 24.03.0 00:02:07.256 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:07.257 C linker for the host machine: cc ld.bfd 2.39-16 00:02:07.257 Host machine cpu family: x86_64 00:02:07.257 Host machine cpu: x86_64 00:02:07.257 Message: ## Building in Developer Mode ## 00:02:07.257 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:07.257 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:07.257 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:07.257 Program python3 found: YES (/usr/bin/python3) 00:02:07.257 Program cat found: YES (/usr/bin/cat) 00:02:07.257 Compiler for C supports arguments -march=native: YES 00:02:07.257 Checking for size of "void *" : 8 00:02:07.257 Checking for size of "void *" : 8 (cached) 00:02:07.257 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:07.257 Library m found: YES 00:02:07.257 Library numa found: YES 00:02:07.257 Has header "numaif.h" : YES 00:02:07.257 Library fdt found: NO 00:02:07.257 Library execinfo found: NO 00:02:07.257 Has header "execinfo.h" : YES 00:02:07.257 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:07.257 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:07.257 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:07.257 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:07.257 Run-time dependency openssl found: YES 3.0.9 00:02:07.257 Run-time dependency libpcap found: YES 1.10.4 00:02:07.257 Has header "pcap.h" with dependency libpcap: YES 00:02:07.257 Compiler for C supports arguments -Wcast-qual: YES 00:02:07.257 Compiler for C supports arguments -Wdeprecated: YES 00:02:07.257 Compiler for C supports arguments -Wformat: YES 00:02:07.257 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:07.257 Compiler for C supports arguments -Wformat-security: NO 00:02:07.257 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:07.257 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:07.257 Compiler for C supports arguments -Wnested-externs: YES 00:02:07.257 Compiler for C supports arguments -Wold-style-definition: YES 00:02:07.257 Compiler for C supports arguments -Wpointer-arith: YES 00:02:07.257 Compiler for C supports arguments -Wsign-compare: YES 00:02:07.257 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:07.257 Compiler for C supports arguments -Wundef: YES 00:02:07.257 Compiler for C supports arguments -Wwrite-strings: YES 00:02:07.257 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:07.257 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:07.257 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:07.257 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:07.257 Program objdump found: YES (/usr/bin/objdump) 00:02:07.257 Compiler for C supports arguments -mavx512f: YES 00:02:07.257 Checking if "AVX512 checking" compiles: YES 00:02:07.257 Fetching 
value of define "__SSE4_2__" : 1 00:02:07.257 Fetching value of define "__AES__" : 1 00:02:07.257 Fetching value of define "__AVX__" : 1 00:02:07.257 Fetching value of define "__AVX2__" : 1 00:02:07.257 Fetching value of define "__AVX512BW__" : 1 00:02:07.257 Fetching value of define "__AVX512CD__" : 1 00:02:07.257 Fetching value of define "__AVX512DQ__" : 1 00:02:07.257 Fetching value of define "__AVX512F__" : 1 00:02:07.257 Fetching value of define "__AVX512VL__" : 1 00:02:07.257 Fetching value of define "__PCLMUL__" : 1 00:02:07.257 Fetching value of define "__RDRND__" : 1 00:02:07.257 Fetching value of define "__RDSEED__" : 1 00:02:07.257 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:07.257 Fetching value of define "__znver1__" : (undefined) 00:02:07.257 Fetching value of define "__znver2__" : (undefined) 00:02:07.257 Fetching value of define "__znver3__" : (undefined) 00:02:07.257 Fetching value of define "__znver4__" : (undefined) 00:02:07.257 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:07.257 Message: lib/log: Defining dependency "log" 00:02:07.257 Message: lib/kvargs: Defining dependency "kvargs" 00:02:07.257 Message: lib/telemetry: Defining dependency "telemetry" 00:02:07.257 Checking for function "getentropy" : NO 00:02:07.257 Message: lib/eal: Defining dependency "eal" 00:02:07.257 Message: lib/ring: Defining dependency "ring" 00:02:07.257 Message: lib/rcu: Defining dependency "rcu" 00:02:07.257 Message: lib/mempool: Defining dependency "mempool" 00:02:07.257 Message: lib/mbuf: Defining dependency "mbuf" 00:02:07.257 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:07.257 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.257 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.257 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:07.257 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:07.257 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:07.257 Compiler for C supports arguments -mpclmul: YES 00:02:07.257 Compiler for C supports arguments -maes: YES 00:02:07.257 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.257 Compiler for C supports arguments -mavx512bw: YES 00:02:07.257 Compiler for C supports arguments -mavx512dq: YES 00:02:07.257 Compiler for C supports arguments -mavx512vl: YES 00:02:07.257 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:07.257 Compiler for C supports arguments -mavx2: YES 00:02:07.257 Compiler for C supports arguments -mavx: YES 00:02:07.257 Message: lib/net: Defining dependency "net" 00:02:07.257 Message: lib/meter: Defining dependency "meter" 00:02:07.257 Message: lib/ethdev: Defining dependency "ethdev" 00:02:07.257 Message: lib/pci: Defining dependency "pci" 00:02:07.257 Message: lib/cmdline: Defining dependency "cmdline" 00:02:07.257 Message: lib/hash: Defining dependency "hash" 00:02:07.257 Message: lib/timer: Defining dependency "timer" 00:02:07.257 Message: lib/compressdev: Defining dependency "compressdev" 00:02:07.257 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:07.257 Message: lib/dmadev: Defining dependency "dmadev" 00:02:07.257 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:07.257 Message: lib/power: Defining dependency "power" 00:02:07.257 Message: lib/reorder: Defining dependency "reorder" 00:02:07.257 Message: lib/security: Defining dependency "security" 00:02:07.257 Has header "linux/userfaultfd.h" : YES 00:02:07.257 Has header "linux/vduse.h" : YES 00:02:07.257 Message: 
lib/vhost: Defining dependency "vhost" 00:02:07.257 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.257 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.257 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.257 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.257 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:07.257 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:07.257 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:07.257 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:07.257 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:07.257 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:07.257 Program doxygen found: YES (/usr/bin/doxygen) 00:02:07.257 Configuring doxy-api-html.conf using configuration 00:02:07.257 Configuring doxy-api-man.conf using configuration 00:02:07.257 Program mandb found: YES (/usr/bin/mandb) 00:02:07.257 Program sphinx-build found: NO 00:02:07.257 Configuring rte_build_config.h using configuration 00:02:07.257 Message: 00:02:07.257 ================= 00:02:07.257 Applications Enabled 00:02:07.257 ================= 00:02:07.257 00:02:07.257 apps: 00:02:07.257 00:02:07.257 00:02:07.257 Message: 00:02:07.257 ================= 00:02:07.257 Libraries Enabled 00:02:07.257 ================= 00:02:07.257 00:02:07.257 libs: 00:02:07.257 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.257 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:07.257 cryptodev, dmadev, power, reorder, security, vhost, 00:02:07.257 00:02:07.257 Message: 00:02:07.257 =============== 00:02:07.257 Drivers Enabled 00:02:07.257 =============== 00:02:07.257 00:02:07.257 common: 00:02:07.257 00:02:07.257 bus: 00:02:07.257 pci, vdev, 00:02:07.257 mempool: 00:02:07.257 ring, 00:02:07.257 dma: 00:02:07.257 00:02:07.257 net: 00:02:07.257 00:02:07.257 crypto: 00:02:07.257 00:02:07.257 compress: 00:02:07.257 00:02:07.257 vdpa: 00:02:07.257 00:02:07.257 00:02:07.257 Message: 00:02:07.257 ================= 00:02:07.257 Content Skipped 00:02:07.257 ================= 00:02:07.257 00:02:07.257 apps: 00:02:07.257 dumpcap: explicitly disabled via build config 00:02:07.257 graph: explicitly disabled via build config 00:02:07.257 pdump: explicitly disabled via build config 00:02:07.257 proc-info: explicitly disabled via build config 00:02:07.257 test-acl: explicitly disabled via build config 00:02:07.257 test-bbdev: explicitly disabled via build config 00:02:07.257 test-cmdline: explicitly disabled via build config 00:02:07.257 test-compress-perf: explicitly disabled via build config 00:02:07.257 test-crypto-perf: explicitly disabled via build config 00:02:07.257 test-dma-perf: explicitly disabled via build config 00:02:07.257 test-eventdev: explicitly disabled via build config 00:02:07.257 test-fib: explicitly disabled via build config 00:02:07.257 test-flow-perf: explicitly disabled via build config 00:02:07.257 test-gpudev: explicitly disabled via build config 00:02:07.257 test-mldev: explicitly disabled via build config 00:02:07.257 test-pipeline: explicitly disabled via build config 00:02:07.257 test-pmd: explicitly disabled via build config 00:02:07.257 test-regex: explicitly disabled via build config 00:02:07.257 test-sad: explicitly disabled via build config 00:02:07.257 test-security-perf: explicitly disabled via 
build config 00:02:07.257 00:02:07.257 libs: 00:02:07.257 argparse: explicitly disabled via build config 00:02:07.257 metrics: explicitly disabled via build config 00:02:07.257 acl: explicitly disabled via build config 00:02:07.257 bbdev: explicitly disabled via build config 00:02:07.257 bitratestats: explicitly disabled via build config 00:02:07.257 bpf: explicitly disabled via build config 00:02:07.257 cfgfile: explicitly disabled via build config 00:02:07.257 distributor: explicitly disabled via build config 00:02:07.257 efd: explicitly disabled via build config 00:02:07.257 eventdev: explicitly disabled via build config 00:02:07.257 dispatcher: explicitly disabled via build config 00:02:07.257 gpudev: explicitly disabled via build config 00:02:07.257 gro: explicitly disabled via build config 00:02:07.257 gso: explicitly disabled via build config 00:02:07.257 ip_frag: explicitly disabled via build config 00:02:07.257 jobstats: explicitly disabled via build config 00:02:07.257 latencystats: explicitly disabled via build config 00:02:07.257 lpm: explicitly disabled via build config 00:02:07.257 member: explicitly disabled via build config 00:02:07.258 pcapng: explicitly disabled via build config 00:02:07.258 rawdev: explicitly disabled via build config 00:02:07.258 regexdev: explicitly disabled via build config 00:02:07.258 mldev: explicitly disabled via build config 00:02:07.258 rib: explicitly disabled via build config 00:02:07.258 sched: explicitly disabled via build config 00:02:07.258 stack: explicitly disabled via build config 00:02:07.258 ipsec: explicitly disabled via build config 00:02:07.258 pdcp: explicitly disabled via build config 00:02:07.258 fib: explicitly disabled via build config 00:02:07.258 port: explicitly disabled via build config 00:02:07.258 pdump: explicitly disabled via build config 00:02:07.258 table: explicitly disabled via build config 00:02:07.258 pipeline: explicitly disabled via build config 00:02:07.258 graph: explicitly disabled via build config 00:02:07.258 node: explicitly disabled via build config 00:02:07.258 00:02:07.258 drivers: 00:02:07.258 common/cpt: not in enabled drivers build config 00:02:07.258 common/dpaax: not in enabled drivers build config 00:02:07.258 common/iavf: not in enabled drivers build config 00:02:07.258 common/idpf: not in enabled drivers build config 00:02:07.258 common/ionic: not in enabled drivers build config 00:02:07.258 common/mvep: not in enabled drivers build config 00:02:07.258 common/octeontx: not in enabled drivers build config 00:02:07.258 bus/auxiliary: not in enabled drivers build config 00:02:07.258 bus/cdx: not in enabled drivers build config 00:02:07.258 bus/dpaa: not in enabled drivers build config 00:02:07.258 bus/fslmc: not in enabled drivers build config 00:02:07.258 bus/ifpga: not in enabled drivers build config 00:02:07.258 bus/platform: not in enabled drivers build config 00:02:07.258 bus/uacce: not in enabled drivers build config 00:02:07.258 bus/vmbus: not in enabled drivers build config 00:02:07.258 common/cnxk: not in enabled drivers build config 00:02:07.258 common/mlx5: not in enabled drivers build config 00:02:07.258 common/nfp: not in enabled drivers build config 00:02:07.258 common/nitrox: not in enabled drivers build config 00:02:07.258 common/qat: not in enabled drivers build config 00:02:07.258 common/sfc_efx: not in enabled drivers build config 00:02:07.258 mempool/bucket: not in enabled drivers build config 00:02:07.258 mempool/cnxk: not in enabled drivers build config 00:02:07.258 
mempool/dpaa: not in enabled drivers build config 00:02:07.258 mempool/dpaa2: not in enabled drivers build config 00:02:07.258 mempool/octeontx: not in enabled drivers build config 00:02:07.258 mempool/stack: not in enabled drivers build config 00:02:07.258 dma/cnxk: not in enabled drivers build config 00:02:07.258 dma/dpaa: not in enabled drivers build config 00:02:07.258 dma/dpaa2: not in enabled drivers build config 00:02:07.258 dma/hisilicon: not in enabled drivers build config 00:02:07.258 dma/idxd: not in enabled drivers build config 00:02:07.258 dma/ioat: not in enabled drivers build config 00:02:07.258 dma/skeleton: not in enabled drivers build config 00:02:07.258 net/af_packet: not in enabled drivers build config 00:02:07.258 net/af_xdp: not in enabled drivers build config 00:02:07.258 net/ark: not in enabled drivers build config 00:02:07.258 net/atlantic: not in enabled drivers build config 00:02:07.258 net/avp: not in enabled drivers build config 00:02:07.258 net/axgbe: not in enabled drivers build config 00:02:07.258 net/bnx2x: not in enabled drivers build config 00:02:07.258 net/bnxt: not in enabled drivers build config 00:02:07.258 net/bonding: not in enabled drivers build config 00:02:07.258 net/cnxk: not in enabled drivers build config 00:02:07.258 net/cpfl: not in enabled drivers build config 00:02:07.258 net/cxgbe: not in enabled drivers build config 00:02:07.258 net/dpaa: not in enabled drivers build config 00:02:07.258 net/dpaa2: not in enabled drivers build config 00:02:07.258 net/e1000: not in enabled drivers build config 00:02:07.258 net/ena: not in enabled drivers build config 00:02:07.258 net/enetc: not in enabled drivers build config 00:02:07.258 net/enetfec: not in enabled drivers build config 00:02:07.258 net/enic: not in enabled drivers build config 00:02:07.258 net/failsafe: not in enabled drivers build config 00:02:07.258 net/fm10k: not in enabled drivers build config 00:02:07.258 net/gve: not in enabled drivers build config 00:02:07.258 net/hinic: not in enabled drivers build config 00:02:07.258 net/hns3: not in enabled drivers build config 00:02:07.258 net/i40e: not in enabled drivers build config 00:02:07.258 net/iavf: not in enabled drivers build config 00:02:07.258 net/ice: not in enabled drivers build config 00:02:07.258 net/idpf: not in enabled drivers build config 00:02:07.258 net/igc: not in enabled drivers build config 00:02:07.258 net/ionic: not in enabled drivers build config 00:02:07.258 net/ipn3ke: not in enabled drivers build config 00:02:07.258 net/ixgbe: not in enabled drivers build config 00:02:07.258 net/mana: not in enabled drivers build config 00:02:07.258 net/memif: not in enabled drivers build config 00:02:07.258 net/mlx4: not in enabled drivers build config 00:02:07.258 net/mlx5: not in enabled drivers build config 00:02:07.258 net/mvneta: not in enabled drivers build config 00:02:07.258 net/mvpp2: not in enabled drivers build config 00:02:07.258 net/netvsc: not in enabled drivers build config 00:02:07.258 net/nfb: not in enabled drivers build config 00:02:07.258 net/nfp: not in enabled drivers build config 00:02:07.258 net/ngbe: not in enabled drivers build config 00:02:07.258 net/null: not in enabled drivers build config 00:02:07.258 net/octeontx: not in enabled drivers build config 00:02:07.258 net/octeon_ep: not in enabled drivers build config 00:02:07.258 net/pcap: not in enabled drivers build config 00:02:07.258 net/pfe: not in enabled drivers build config 00:02:07.258 net/qede: not in enabled drivers build config 00:02:07.258 
net/ring: not in enabled drivers build config 00:02:07.258 net/sfc: not in enabled drivers build config 00:02:07.258 net/softnic: not in enabled drivers build config 00:02:07.258 net/tap: not in enabled drivers build config 00:02:07.258 net/thunderx: not in enabled drivers build config 00:02:07.258 net/txgbe: not in enabled drivers build config 00:02:07.258 net/vdev_netvsc: not in enabled drivers build config 00:02:07.258 net/vhost: not in enabled drivers build config 00:02:07.258 net/virtio: not in enabled drivers build config 00:02:07.258 net/vmxnet3: not in enabled drivers build config 00:02:07.258 raw/*: missing internal dependency, "rawdev" 00:02:07.258 crypto/armv8: not in enabled drivers build config 00:02:07.258 crypto/bcmfs: not in enabled drivers build config 00:02:07.258 crypto/caam_jr: not in enabled drivers build config 00:02:07.258 crypto/ccp: not in enabled drivers build config 00:02:07.258 crypto/cnxk: not in enabled drivers build config 00:02:07.258 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.258 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.258 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.258 crypto/mlx5: not in enabled drivers build config 00:02:07.258 crypto/mvsam: not in enabled drivers build config 00:02:07.258 crypto/nitrox: not in enabled drivers build config 00:02:07.258 crypto/null: not in enabled drivers build config 00:02:07.258 crypto/octeontx: not in enabled drivers build config 00:02:07.258 crypto/openssl: not in enabled drivers build config 00:02:07.258 crypto/scheduler: not in enabled drivers build config 00:02:07.258 crypto/uadk: not in enabled drivers build config 00:02:07.258 crypto/virtio: not in enabled drivers build config 00:02:07.258 compress/isal: not in enabled drivers build config 00:02:07.258 compress/mlx5: not in enabled drivers build config 00:02:07.258 compress/nitrox: not in enabled drivers build config 00:02:07.258 compress/octeontx: not in enabled drivers build config 00:02:07.258 compress/zlib: not in enabled drivers build config 00:02:07.258 regex/*: missing internal dependency, "regexdev" 00:02:07.258 ml/*: missing internal dependency, "mldev" 00:02:07.258 vdpa/ifc: not in enabled drivers build config 00:02:07.258 vdpa/mlx5: not in enabled drivers build config 00:02:07.258 vdpa/nfp: not in enabled drivers build config 00:02:07.258 vdpa/sfc: not in enabled drivers build config 00:02:07.258 event/*: missing internal dependency, "eventdev" 00:02:07.258 baseband/*: missing internal dependency, "bbdev" 00:02:07.258 gpu/*: missing internal dependency, "gpudev" 00:02:07.258 00:02:07.258 00:02:07.258 Build targets in project: 85 00:02:07.258 00:02:07.258 DPDK 24.03.0 00:02:07.258 00:02:07.258 User defined options 00:02:07.258 buildtype : debug 00:02:07.258 default_library : shared 00:02:07.258 libdir : lib 00:02:07.258 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:07.258 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:07.258 c_link_args : 00:02:07.258 cpu_instruction_set: native 00:02:07.258 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:07.258 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:07.258 enable_docs : false 00:02:07.258 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:07.258 enable_kmods : false 00:02:07.258 max_lcores : 128 00:02:07.258 tests : false 00:02:07.258 00:02:07.258 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.518 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:07.787 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.787 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.787 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.787 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.787 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:07.787 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.787 [7/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:07.787 [8/268] Linking static target lib/librte_kvargs.a 00:02:07.787 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:07.787 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.787 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:07.787 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:07.787 [13/268] Linking static target lib/librte_log.a 00:02:07.787 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.787 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:07.787 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:07.787 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.787 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.787 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.046 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.046 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.046 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.046 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.046 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:08.046 [25/268] Linking static target lib/librte_pci.a 00:02:08.046 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.046 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.046 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:08.046 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.046 [30/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:08.046 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:08.046 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:08.046 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.046 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
00:02:08.304 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:08.304 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.304 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.304 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.304 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.304 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.304 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.304 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:08.304 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.304 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.304 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:08.304 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.304 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.304 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.304 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.304 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.304 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.304 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.304 [53/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.304 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.304 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:08.304 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.304 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.304 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.304 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:08.304 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.304 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.304 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:08.304 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:08.304 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.304 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.304 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.304 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.304 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.304 [69/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.304 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.304 [71/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.304 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:08.304 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:08.304 [74/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.305 [75/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.305 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.305 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:08.305 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:08.305 [79/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:08.305 [80/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.305 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.305 [82/268] Linking static target lib/librte_meter.a 00:02:08.305 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.305 [84/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.305 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:08.305 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.305 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:08.305 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:08.305 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:08.305 [90/268] Linking static target lib/librte_telemetry.a 00:02:08.305 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:08.305 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.305 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:08.305 [94/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:08.305 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.305 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.305 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:08.305 [98/268] Linking static target lib/librte_ring.a 00:02:08.305 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.305 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.305 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:08.305 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:08.305 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.305 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:08.305 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:08.564 [106/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:08.564 [107/268] Linking static target lib/librte_cmdline.a 00:02:08.564 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.564 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:08.564 [110/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:08.564 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.564 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.564 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:08.564 [114/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:08.564 [115/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.564 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.564 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:08.564 [118/268] Linking static target lib/librte_timer.a 00:02:08.564 [119/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:08.564 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:08.564 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.564 [122/268] Linking static target lib/librte_mempool.a 00:02:08.564 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.564 [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.564 [125/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.564 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.564 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.564 [128/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.564 [129/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.564 [130/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:08.564 [131/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.564 [132/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:08.564 [133/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:08.564 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.564 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.564 [136/268] Linking static target lib/librte_net.a 00:02:08.564 [137/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.564 [138/268] Linking static target lib/librte_rcu.a 00:02:08.564 [139/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.564 [140/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:08.564 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.564 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:08.564 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:08.564 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:08.564 [145/268] Linking static target lib/librte_eal.a 00:02:08.564 [146/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.564 [147/268] Linking static target lib/librte_compressdev.a 00:02:08.564 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.564 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.564 [150/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.564 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.564 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.564 [153/268] Linking static target lib/librte_dmadev.a 00:02:08.564 [154/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.564 [155/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 
00:02:08.564 [156/268] Linking target lib/librte_log.so.24.1 00:02:08.564 [157/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:08.564 [158/268] Linking static target lib/librte_mbuf.a 00:02:08.822 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:08.823 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.823 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:08.823 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:08.823 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:08.823 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:08.823 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:08.823 [166/268] Linking static target lib/librte_power.a 00:02:08.823 [167/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.823 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:08.823 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:08.823 [170/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:08.823 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:08.823 [172/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:08.823 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.823 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:08.823 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:08.823 [176/268] Linking static target lib/librte_hash.a 00:02:08.823 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:08.823 [178/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:08.823 [179/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.823 [180/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:08.823 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:08.823 [182/268] Linking static target lib/librte_reorder.a 00:02:08.823 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:08.823 [184/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.823 [185/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.823 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:08.823 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:08.823 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:08.823 [189/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.823 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:08.823 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:08.823 [192/268] Linking static target lib/librte_security.a 00:02:08.823 [193/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:08.823 [194/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.823 [195/268] Linking static target lib/librte_cryptodev.a 00:02:08.823 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:09.082 [197/268] 
Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.082 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:09.082 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.082 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:09.082 [201/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.082 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:09.082 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:09.082 [204/268] Linking target lib/librte_telemetry.so.24.1 00:02:09.082 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.082 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:09.082 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:09.082 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:09.082 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:09.082 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.082 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:09.082 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:09.082 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.341 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.341 [215/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.341 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.341 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.341 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.341 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.599 [220/268] Linking static target lib/librte_ethdev.a 00:02:09.599 [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.599 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.599 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.856 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.856 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.856 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.856 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.424 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.424 [229/268] Linking static target lib/librte_vhost.a 00:02:10.992 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.969 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.557 [232/268] Generating lib/ethdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:20.934 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.934 [234/268] Linking target lib/librte_eal.so.24.1 00:02:21.192 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.192 [236/268] Linking target lib/librte_pci.so.24.1 00:02:21.192 [237/268] Linking target lib/librte_ring.so.24.1 00:02:21.192 [238/268] Linking target lib/librte_meter.so.24.1 00:02:21.192 [239/268] Linking target lib/librte_timer.so.24.1 00:02:21.192 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.192 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.450 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.450 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.450 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.450 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.450 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.450 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:21.450 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:21.450 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.450 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:21.450 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.708 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:21.708 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:21.708 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:21.708 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:21.708 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:21.708 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:21.708 [258/268] Linking target lib/librte_net.so.24.1 00:02:21.967 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:21.967 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:21.967 [261/268] Linking target lib/librte_hash.so.24.1 00:02:21.967 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:21.967 [263/268] Linking target lib/librte_security.so.24.1 00:02:21.967 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:22.226 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:22.226 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:22.226 [267/268] Linking target lib/librte_power.so.24.1 00:02:22.226 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:22.226 INFO: autodetecting backend as ninja 00:02:22.226 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:23.162 CC lib/ut/ut.o 00:02:23.420 CC lib/log/log.o 00:02:23.420 CC lib/log/log_flags.o 00:02:23.420 CC lib/log/log_deprecated.o 00:02:23.420 CC lib/ut_mock/mock.o 00:02:23.420 LIB libspdk_ut.a 00:02:23.420 LIB libspdk_log.a 00:02:23.420 SO libspdk_ut.so.2.0 00:02:23.420 LIB libspdk_ut_mock.a 00:02:23.420 SO libspdk_log.so.7.0 00:02:23.420 SO libspdk_ut_mock.so.6.0 00:02:23.420 SYMLINK libspdk_ut.so 00:02:23.678 SYMLINK libspdk_log.so 00:02:23.678 SYMLINK 
libspdk_ut_mock.so 00:02:23.936 CC lib/ioat/ioat.o 00:02:23.936 CXX lib/trace_parser/trace.o 00:02:23.936 CC lib/util/base64.o 00:02:23.936 CC lib/util/bit_array.o 00:02:23.936 CC lib/util/crc16.o 00:02:23.936 CC lib/util/cpuset.o 00:02:23.936 CC lib/util/crc32c.o 00:02:23.936 CC lib/util/crc32.o 00:02:23.936 CC lib/util/crc32_ieee.o 00:02:23.936 CC lib/util/crc64.o 00:02:23.936 CC lib/util/dif.o 00:02:23.936 CC lib/util/fd.o 00:02:23.936 CC lib/util/file.o 00:02:23.936 CC lib/dma/dma.o 00:02:23.936 CC lib/util/hexlify.o 00:02:23.936 CC lib/util/iov.o 00:02:23.936 CC lib/util/strerror_tls.o 00:02:23.936 CC lib/util/math.o 00:02:23.936 CC lib/util/pipe.o 00:02:23.936 CC lib/util/string.o 00:02:23.936 CC lib/util/uuid.o 00:02:23.936 CC lib/util/fd_group.o 00:02:23.936 CC lib/util/xor.o 00:02:23.936 CC lib/util/zipf.o 00:02:24.193 CC lib/vfio_user/host/vfio_user_pci.o 00:02:24.193 CC lib/vfio_user/host/vfio_user.o 00:02:24.193 LIB libspdk_dma.a 00:02:24.193 LIB libspdk_ioat.a 00:02:24.193 SO libspdk_dma.so.4.0 00:02:24.193 SO libspdk_ioat.so.7.0 00:02:24.193 SYMLINK libspdk_dma.so 00:02:24.193 SYMLINK libspdk_ioat.so 00:02:24.193 LIB libspdk_vfio_user.a 00:02:24.451 LIB libspdk_util.a 00:02:24.451 SO libspdk_vfio_user.so.5.0 00:02:24.451 SO libspdk_util.so.9.1 00:02:24.451 SYMLINK libspdk_vfio_user.so 00:02:24.451 SYMLINK libspdk_util.so 00:02:24.451 LIB libspdk_trace_parser.a 00:02:24.709 SO libspdk_trace_parser.so.5.0 00:02:24.709 SYMLINK libspdk_trace_parser.so 00:02:24.967 CC lib/env_dpdk/env.o 00:02:24.967 CC lib/env_dpdk/init.o 00:02:24.967 CC lib/env_dpdk/memory.o 00:02:24.967 CC lib/env_dpdk/pci.o 00:02:24.967 CC lib/env_dpdk/threads.o 00:02:24.968 CC lib/env_dpdk/pci_ioat.o 00:02:24.968 CC lib/rdma_provider/common.o 00:02:24.968 CC lib/env_dpdk/pci_virtio.o 00:02:24.968 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:24.968 CC lib/env_dpdk/pci_vmd.o 00:02:24.968 CC lib/json/json_parse.o 00:02:24.968 CC lib/env_dpdk/pci_idxd.o 00:02:24.968 CC lib/conf/conf.o 00:02:24.968 CC lib/json/json_util.o 00:02:24.968 CC lib/env_dpdk/pci_event.o 00:02:24.968 CC lib/json/json_write.o 00:02:24.968 CC lib/env_dpdk/sigbus_handler.o 00:02:24.968 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.968 CC lib/env_dpdk/pci_dpdk.o 00:02:24.968 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.968 CC lib/vmd/vmd.o 00:02:24.968 CC lib/vmd/led.o 00:02:24.968 CC lib/idxd/idxd.o 00:02:24.968 CC lib/idxd/idxd_user.o 00:02:24.968 CC lib/idxd/idxd_kernel.o 00:02:24.968 CC lib/rdma_utils/rdma_utils.o 00:02:24.968 LIB libspdk_rdma_provider.a 00:02:25.226 LIB libspdk_conf.a 00:02:25.226 SO libspdk_rdma_provider.so.6.0 00:02:25.226 SO libspdk_conf.so.6.0 00:02:25.226 LIB libspdk_json.a 00:02:25.226 LIB libspdk_rdma_utils.a 00:02:25.226 SYMLINK libspdk_rdma_provider.so 00:02:25.226 SO libspdk_json.so.6.0 00:02:25.226 SYMLINK libspdk_conf.so 00:02:25.226 SO libspdk_rdma_utils.so.1.0 00:02:25.226 SYMLINK libspdk_json.so 00:02:25.226 SYMLINK libspdk_rdma_utils.so 00:02:25.226 LIB libspdk_idxd.a 00:02:25.484 LIB libspdk_vmd.a 00:02:25.484 SO libspdk_idxd.so.12.0 00:02:25.484 SO libspdk_vmd.so.6.0 00:02:25.484 SYMLINK libspdk_idxd.so 00:02:25.484 SYMLINK libspdk_vmd.so 00:02:25.743 CC lib/jsonrpc/jsonrpc_server.o 00:02:25.743 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:25.743 CC lib/jsonrpc/jsonrpc_client.o 00:02:25.743 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:25.743 LIB libspdk_jsonrpc.a 00:02:26.002 LIB libspdk_env_dpdk.a 00:02:26.002 SO libspdk_jsonrpc.so.6.0 00:02:26.002 SO libspdk_env_dpdk.so.14.1 00:02:26.002 SYMLINK libspdk_jsonrpc.so 
00:02:26.002 SYMLINK libspdk_env_dpdk.so 00:02:26.260 CC lib/rpc/rpc.o 00:02:26.519 LIB libspdk_rpc.a 00:02:26.519 SO libspdk_rpc.so.6.0 00:02:26.519 SYMLINK libspdk_rpc.so 00:02:27.085 CC lib/trace/trace_rpc.o 00:02:27.085 CC lib/trace/trace.o 00:02:27.085 CC lib/trace/trace_flags.o 00:02:27.085 CC lib/notify/notify.o 00:02:27.085 CC lib/notify/notify_rpc.o 00:02:27.085 CC lib/keyring/keyring.o 00:02:27.085 CC lib/keyring/keyring_rpc.o 00:02:27.085 LIB libspdk_notify.a 00:02:27.085 LIB libspdk_trace.a 00:02:27.085 SO libspdk_notify.so.6.0 00:02:27.085 LIB libspdk_keyring.a 00:02:27.344 SO libspdk_trace.so.10.0 00:02:27.344 SYMLINK libspdk_notify.so 00:02:27.344 SO libspdk_keyring.so.1.0 00:02:27.344 SYMLINK libspdk_trace.so 00:02:27.344 SYMLINK libspdk_keyring.so 00:02:27.603 CC lib/sock/sock.o 00:02:27.603 CC lib/sock/sock_rpc.o 00:02:27.603 CC lib/thread/thread.o 00:02:27.603 CC lib/thread/iobuf.o 00:02:27.861 LIB libspdk_sock.a 00:02:27.861 SO libspdk_sock.so.10.0 00:02:28.120 SYMLINK libspdk_sock.so 00:02:28.377 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:28.377 CC lib/nvme/nvme_ns_cmd.o 00:02:28.377 CC lib/nvme/nvme_ctrlr.o 00:02:28.377 CC lib/nvme/nvme_fabric.o 00:02:28.377 CC lib/nvme/nvme_pcie_common.o 00:02:28.377 CC lib/nvme/nvme_ns.o 00:02:28.377 CC lib/nvme/nvme_qpair.o 00:02:28.377 CC lib/nvme/nvme_pcie.o 00:02:28.377 CC lib/nvme/nvme_quirks.o 00:02:28.377 CC lib/nvme/nvme_transport.o 00:02:28.377 CC lib/nvme/nvme.o 00:02:28.377 CC lib/nvme/nvme_discovery.o 00:02:28.377 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:28.377 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:28.377 CC lib/nvme/nvme_tcp.o 00:02:28.377 CC lib/nvme/nvme_opal.o 00:02:28.377 CC lib/nvme/nvme_io_msg.o 00:02:28.377 CC lib/nvme/nvme_poll_group.o 00:02:28.377 CC lib/nvme/nvme_zns.o 00:02:28.377 CC lib/nvme/nvme_stubs.o 00:02:28.377 CC lib/nvme/nvme_auth.o 00:02:28.377 CC lib/nvme/nvme_rdma.o 00:02:28.377 CC lib/nvme/nvme_cuse.o 00:02:28.635 LIB libspdk_thread.a 00:02:28.635 SO libspdk_thread.so.10.1 00:02:28.893 SYMLINK libspdk_thread.so 00:02:29.151 CC lib/virtio/virtio_vfio_user.o 00:02:29.152 CC lib/virtio/virtio.o 00:02:29.152 CC lib/virtio/virtio_vhost_user.o 00:02:29.152 CC lib/accel/accel.o 00:02:29.152 CC lib/virtio/virtio_pci.o 00:02:29.152 CC lib/accel/accel_rpc.o 00:02:29.152 CC lib/accel/accel_sw.o 00:02:29.152 CC lib/init/json_config.o 00:02:29.152 CC lib/init/subsystem.o 00:02:29.152 CC lib/init/subsystem_rpc.o 00:02:29.152 CC lib/init/rpc.o 00:02:29.152 CC lib/blob/blobstore.o 00:02:29.152 CC lib/blob/blob_bs_dev.o 00:02:29.152 CC lib/blob/request.o 00:02:29.152 CC lib/blob/zeroes.o 00:02:29.410 LIB libspdk_init.a 00:02:29.410 SO libspdk_init.so.5.0 00:02:29.410 LIB libspdk_virtio.a 00:02:29.410 SO libspdk_virtio.so.7.0 00:02:29.410 SYMLINK libspdk_init.so 00:02:29.667 SYMLINK libspdk_virtio.so 00:02:29.926 CC lib/event/app.o 00:02:29.926 CC lib/event/reactor.o 00:02:29.926 CC lib/event/log_rpc.o 00:02:29.926 CC lib/event/app_rpc.o 00:02:29.926 CC lib/event/scheduler_static.o 00:02:29.926 LIB libspdk_accel.a 00:02:29.926 SO libspdk_accel.so.15.1 00:02:29.926 LIB libspdk_nvme.a 00:02:29.926 SYMLINK libspdk_accel.so 00:02:29.926 SO libspdk_nvme.so.13.1 00:02:30.217 LIB libspdk_event.a 00:02:30.217 SO libspdk_event.so.14.0 00:02:30.217 SYMLINK libspdk_event.so 00:02:30.217 CC lib/bdev/bdev.o 00:02:30.217 CC lib/bdev/bdev_rpc.o 00:02:30.217 CC lib/bdev/bdev_zone.o 00:02:30.217 CC lib/bdev/part.o 00:02:30.217 CC lib/bdev/scsi_nvme.o 00:02:30.217 SYMLINK libspdk_nvme.so 00:02:31.153 LIB libspdk_blob.a 00:02:31.413 SO 
libspdk_blob.so.11.0 00:02:31.413 SYMLINK libspdk_blob.so 00:02:31.672 CC lib/blobfs/blobfs.o 00:02:31.672 CC lib/lvol/lvol.o 00:02:31.672 CC lib/blobfs/tree.o 00:02:31.931 LIB libspdk_bdev.a 00:02:32.191 SO libspdk_bdev.so.15.1 00:02:32.191 SYMLINK libspdk_bdev.so 00:02:32.191 LIB libspdk_blobfs.a 00:02:32.450 SO libspdk_blobfs.so.10.0 00:02:32.450 LIB libspdk_lvol.a 00:02:32.450 SO libspdk_lvol.so.10.0 00:02:32.450 SYMLINK libspdk_blobfs.so 00:02:32.450 CC lib/nbd/nbd.o 00:02:32.450 CC lib/nbd/nbd_rpc.o 00:02:32.450 SYMLINK libspdk_lvol.so 00:02:32.450 CC lib/scsi/dev.o 00:02:32.450 CC lib/scsi/lun.o 00:02:32.450 CC lib/scsi/port.o 00:02:32.450 CC lib/scsi/scsi.o 00:02:32.450 CC lib/scsi/scsi_bdev.o 00:02:32.450 CC lib/scsi/scsi_pr.o 00:02:32.450 CC lib/scsi/scsi_rpc.o 00:02:32.450 CC lib/scsi/task.o 00:02:32.450 CC lib/ublk/ublk.o 00:02:32.450 CC lib/ublk/ublk_rpc.o 00:02:32.450 CC lib/ftl/ftl_core.o 00:02:32.450 CC lib/ftl/ftl_layout.o 00:02:32.450 CC lib/ftl/ftl_debug.o 00:02:32.450 CC lib/ftl/ftl_init.o 00:02:32.450 CC lib/nvmf/ctrlr.o 00:02:32.450 CC lib/nvmf/ctrlr_discovery.o 00:02:32.450 CC lib/ftl/ftl_io.o 00:02:32.450 CC lib/nvmf/ctrlr_bdev.o 00:02:32.450 CC lib/ftl/ftl_sb.o 00:02:32.450 CC lib/nvmf/subsystem.o 00:02:32.450 CC lib/nvmf/nvmf.o 00:02:32.450 CC lib/ftl/ftl_l2p.o 00:02:32.450 CC lib/ftl/ftl_l2p_flat.o 00:02:32.450 CC lib/nvmf/nvmf_rpc.o 00:02:32.450 CC lib/ftl/ftl_nv_cache.o 00:02:32.450 CC lib/ftl/ftl_band.o 00:02:32.450 CC lib/nvmf/transport.o 00:02:32.450 CC lib/ftl/ftl_band_ops.o 00:02:32.450 CC lib/ftl/ftl_reloc.o 00:02:32.450 CC lib/nvmf/tcp.o 00:02:32.450 CC lib/ftl/ftl_writer.o 00:02:32.450 CC lib/nvmf/stubs.o 00:02:32.450 CC lib/ftl/ftl_rq.o 00:02:32.450 CC lib/nvmf/rdma.o 00:02:32.450 CC lib/nvmf/mdns_server.o 00:02:32.450 CC lib/ftl/ftl_l2p_cache.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.450 CC lib/ftl/ftl_p2l.o 00:02:32.450 CC lib/nvmf/auth.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:32.450 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:32.450 CC lib/ftl/utils/ftl_conf.o 00:02:32.450 CC lib/ftl/utils/ftl_md.o 00:02:32.450 CC lib/ftl/utils/ftl_mempool.o 00:02:32.450 CC lib/ftl/utils/ftl_bitmap.o 00:02:32.450 CC lib/ftl/utils/ftl_property.o 00:02:32.729 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:32.729 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:32.729 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:32.729 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:32.729 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:32.729 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:32.729 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:32.729 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:32.729 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:32.729 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:32.729 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:32.729 CC lib/ftl/base/ftl_base_dev.o 00:02:32.729 CC lib/ftl/base/ftl_base_bdev.o 00:02:32.729 CC lib/ftl/ftl_trace.o 00:02:32.988 LIB libspdk_nbd.a 00:02:32.988 SO libspdk_nbd.so.7.0 00:02:32.988 SYMLINK libspdk_nbd.so 00:02:32.988 LIB libspdk_scsi.a 00:02:33.246 SO libspdk_scsi.so.9.0 00:02:33.246 LIB libspdk_ublk.a 
00:02:33.246 SO libspdk_ublk.so.3.0 00:02:33.246 SYMLINK libspdk_scsi.so 00:02:33.246 SYMLINK libspdk_ublk.so 00:02:33.506 LIB libspdk_ftl.a 00:02:33.506 CC lib/iscsi/conn.o 00:02:33.506 CC lib/iscsi/init_grp.o 00:02:33.506 CC lib/iscsi/iscsi.o 00:02:33.506 CC lib/iscsi/md5.o 00:02:33.506 CC lib/iscsi/param.o 00:02:33.506 CC lib/iscsi/iscsi_subsystem.o 00:02:33.506 CC lib/iscsi/tgt_node.o 00:02:33.506 CC lib/iscsi/portal_grp.o 00:02:33.506 CC lib/iscsi/task.o 00:02:33.506 CC lib/iscsi/iscsi_rpc.o 00:02:33.506 CC lib/vhost/vhost.o 00:02:33.506 CC lib/vhost/vhost_rpc.o 00:02:33.506 CC lib/vhost/vhost_scsi.o 00:02:33.506 CC lib/vhost/vhost_blk.o 00:02:33.506 CC lib/vhost/rte_vhost_user.o 00:02:33.764 SO libspdk_ftl.so.9.0 00:02:34.022 SYMLINK libspdk_ftl.so 00:02:34.022 LIB libspdk_nvmf.a 00:02:34.281 SO libspdk_nvmf.so.18.1 00:02:34.281 LIB libspdk_vhost.a 00:02:34.281 SYMLINK libspdk_nvmf.so 00:02:34.541 SO libspdk_vhost.so.8.0 00:02:34.541 SYMLINK libspdk_vhost.so 00:02:34.541 LIB libspdk_iscsi.a 00:02:34.541 SO libspdk_iscsi.so.8.0 00:02:34.801 SYMLINK libspdk_iscsi.so 00:02:35.371 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.371 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.371 CC module/accel/error/accel_error.o 00:02:35.371 CC module/accel/error/accel_error_rpc.o 00:02:35.371 LIB libspdk_env_dpdk_rpc.a 00:02:35.371 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.371 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.371 CC module/accel/ioat/accel_ioat.o 00:02:35.371 CC module/accel/iaa/accel_iaa.o 00:02:35.371 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.371 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.371 CC module/sock/posix/posix.o 00:02:35.631 CC module/keyring/file/keyring.o 00:02:35.631 CC module/keyring/file/keyring_rpc.o 00:02:35.631 CC module/accel/dsa/accel_dsa.o 00:02:35.631 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.631 CC module/blob/bdev/blob_bdev.o 00:02:35.631 CC module/keyring/linux/keyring.o 00:02:35.631 CC module/keyring/linux/keyring_rpc.o 00:02:35.631 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.631 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.631 LIB libspdk_scheduler_gscheduler.a 00:02:35.631 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.631 LIB libspdk_scheduler_dynamic.a 00:02:35.631 LIB libspdk_accel_error.a 00:02:35.631 LIB libspdk_keyring_file.a 00:02:35.631 LIB libspdk_keyring_linux.a 00:02:35.631 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.631 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:35.631 LIB libspdk_accel_ioat.a 00:02:35.631 LIB libspdk_accel_iaa.a 00:02:35.631 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.631 SO libspdk_accel_error.so.2.0 00:02:35.631 SO libspdk_keyring_file.so.1.0 00:02:35.631 SO libspdk_keyring_linux.so.1.0 00:02:35.631 SYMLINK libspdk_scheduler_gscheduler.so 00:02:35.631 SO libspdk_accel_ioat.so.6.0 00:02:35.631 SO libspdk_accel_iaa.so.3.0 00:02:35.631 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:35.631 LIB libspdk_accel_dsa.a 00:02:35.631 SYMLINK libspdk_accel_error.so 00:02:35.892 LIB libspdk_blob_bdev.a 00:02:35.892 SYMLINK libspdk_scheduler_dynamic.so 00:02:35.892 SYMLINK libspdk_keyring_file.so 00:02:35.892 SYMLINK libspdk_keyring_linux.so 00:02:35.892 SO libspdk_accel_dsa.so.5.0 00:02:35.892 SYMLINK libspdk_accel_ioat.so 00:02:35.892 SYMLINK libspdk_accel_iaa.so 00:02:35.892 SO libspdk_blob_bdev.so.11.0 00:02:35.892 SYMLINK libspdk_blob_bdev.so 00:02:35.892 SYMLINK libspdk_accel_dsa.so 00:02:36.152 LIB libspdk_sock_posix.a 00:02:36.152 SO libspdk_sock_posix.so.6.0 00:02:36.152 SYMLINK 
libspdk_sock_posix.so 00:02:36.411 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.411 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.411 CC module/bdev/gpt/gpt.o 00:02:36.411 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.411 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:36.411 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:36.411 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:36.411 CC module/bdev/aio/bdev_aio.o 00:02:36.411 CC module/bdev/malloc/bdev_malloc.o 00:02:36.411 CC module/bdev/aio/bdev_aio_rpc.o 00:02:36.411 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.411 CC module/bdev/null/bdev_null.o 00:02:36.411 CC module/bdev/null/bdev_null_rpc.o 00:02:36.411 CC module/bdev/delay/vbdev_delay.o 00:02:36.411 CC module/bdev/iscsi/bdev_iscsi.o 00:02:36.411 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:36.411 CC module/bdev/raid/bdev_raid.o 00:02:36.411 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:36.411 CC module/bdev/error/vbdev_error.o 00:02:36.411 CC module/bdev/raid/bdev_raid_rpc.o 00:02:36.411 CC module/bdev/raid/bdev_raid_sb.o 00:02:36.411 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.411 CC module/bdev/split/vbdev_split.o 00:02:36.411 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.411 CC module/bdev/raid/raid0.o 00:02:36.411 CC module/bdev/raid/raid1.o 00:02:36.411 CC module/bdev/nvme/bdev_nvme.o 00:02:36.411 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:36.411 CC module/bdev/split/vbdev_split_rpc.o 00:02:36.411 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.411 CC module/bdev/raid/concat.o 00:02:36.411 CC module/bdev/nvme/nvme_rpc.o 00:02:36.411 CC module/bdev/nvme/bdev_mdns_client.o 00:02:36.411 CC module/bdev/nvme/vbdev_opal.o 00:02:36.411 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.411 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:36.411 CC module/bdev/passthru/vbdev_passthru.o 00:02:36.411 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:36.411 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:36.411 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:36.411 CC module/bdev/ftl/bdev_ftl.o 00:02:36.411 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:36.670 LIB libspdk_blobfs_bdev.a 00:02:36.670 SO libspdk_blobfs_bdev.so.6.0 00:02:36.670 LIB libspdk_bdev_split.a 00:02:36.670 LIB libspdk_bdev_gpt.a 00:02:36.670 LIB libspdk_bdev_null.a 00:02:36.670 SO libspdk_bdev_gpt.so.6.0 00:02:36.670 LIB libspdk_bdev_error.a 00:02:36.670 SO libspdk_bdev_split.so.6.0 00:02:36.670 SYMLINK libspdk_blobfs_bdev.so 00:02:36.670 SO libspdk_bdev_null.so.6.0 00:02:36.670 LIB libspdk_bdev_aio.a 00:02:36.670 LIB libspdk_bdev_passthru.a 00:02:36.670 LIB libspdk_bdev_ftl.a 00:02:36.670 SO libspdk_bdev_error.so.6.0 00:02:36.670 LIB libspdk_bdev_malloc.a 00:02:36.670 LIB libspdk_bdev_zone_block.a 00:02:36.670 SYMLINK libspdk_bdev_split.so 00:02:36.670 SO libspdk_bdev_aio.so.6.0 00:02:36.670 SYMLINK libspdk_bdev_gpt.so 00:02:36.670 SO libspdk_bdev_passthru.so.6.0 00:02:36.670 SO libspdk_bdev_ftl.so.6.0 00:02:36.670 LIB libspdk_bdev_iscsi.a 00:02:36.670 SYMLINK libspdk_bdev_null.so 00:02:36.670 LIB libspdk_bdev_delay.a 00:02:36.670 SO libspdk_bdev_zone_block.so.6.0 00:02:36.670 SO libspdk_bdev_malloc.so.6.0 00:02:36.670 SYMLINK libspdk_bdev_error.so 00:02:36.670 SO libspdk_bdev_iscsi.so.6.0 00:02:36.928 SO libspdk_bdev_delay.so.6.0 00:02:36.928 SYMLINK libspdk_bdev_passthru.so 00:02:36.928 SYMLINK libspdk_bdev_malloc.so 00:02:36.928 SYMLINK libspdk_bdev_aio.so 00:02:36.928 LIB libspdk_bdev_virtio.a 00:02:36.928 SYMLINK libspdk_bdev_ftl.so 00:02:36.928 LIB libspdk_bdev_lvol.a 00:02:36.928 SYMLINK libspdk_bdev_zone_block.so 
00:02:36.928 SYMLINK libspdk_bdev_iscsi.so 00:02:36.928 SYMLINK libspdk_bdev_delay.so 00:02:36.928 SO libspdk_bdev_virtio.so.6.0 00:02:36.928 SO libspdk_bdev_lvol.so.6.0 00:02:36.928 SYMLINK libspdk_bdev_virtio.so 00:02:36.928 SYMLINK libspdk_bdev_lvol.so 00:02:37.187 LIB libspdk_bdev_raid.a 00:02:37.187 SO libspdk_bdev_raid.so.6.0 00:02:37.187 SYMLINK libspdk_bdev_raid.so 00:02:38.124 LIB libspdk_bdev_nvme.a 00:02:38.124 SO libspdk_bdev_nvme.so.7.0 00:02:38.124 SYMLINK libspdk_bdev_nvme.so 00:02:38.690 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:38.690 CC module/event/subsystems/vmd/vmd.o 00:02:38.948 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:38.948 CC module/event/subsystems/iobuf/iobuf.o 00:02:38.948 CC module/event/subsystems/keyring/keyring.o 00:02:38.948 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:38.948 CC module/event/subsystems/sock/sock.o 00:02:38.948 CC module/event/subsystems/scheduler/scheduler.o 00:02:38.948 LIB libspdk_event_vmd.a 00:02:38.948 LIB libspdk_event_keyring.a 00:02:38.948 LIB libspdk_event_iobuf.a 00:02:38.948 LIB libspdk_event_vhost_blk.a 00:02:38.948 LIB libspdk_event_sock.a 00:02:38.948 SO libspdk_event_keyring.so.1.0 00:02:38.948 LIB libspdk_event_scheduler.a 00:02:38.948 SO libspdk_event_vmd.so.6.0 00:02:38.948 SO libspdk_event_vhost_blk.so.3.0 00:02:38.948 SO libspdk_event_iobuf.so.3.0 00:02:38.948 SO libspdk_event_scheduler.so.4.0 00:02:38.948 SO libspdk_event_sock.so.5.0 00:02:38.948 SYMLINK libspdk_event_keyring.so 00:02:39.207 SYMLINK libspdk_event_vmd.so 00:02:39.207 SYMLINK libspdk_event_scheduler.so 00:02:39.207 SYMLINK libspdk_event_sock.so 00:02:39.207 SYMLINK libspdk_event_vhost_blk.so 00:02:39.207 SYMLINK libspdk_event_iobuf.so 00:02:39.465 CC module/event/subsystems/accel/accel.o 00:02:39.465 LIB libspdk_event_accel.a 00:02:39.724 SO libspdk_event_accel.so.6.0 00:02:39.724 SYMLINK libspdk_event_accel.so 00:02:40.040 CC module/event/subsystems/bdev/bdev.o 00:02:40.297 LIB libspdk_event_bdev.a 00:02:40.297 SO libspdk_event_bdev.so.6.0 00:02:40.297 SYMLINK libspdk_event_bdev.so 00:02:40.555 CC module/event/subsystems/ublk/ublk.o 00:02:40.555 CC module/event/subsystems/nbd/nbd.o 00:02:40.555 CC module/event/subsystems/scsi/scsi.o 00:02:40.813 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.813 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.813 LIB libspdk_event_ublk.a 00:02:40.813 LIB libspdk_event_nbd.a 00:02:40.813 LIB libspdk_event_scsi.a 00:02:40.813 SO libspdk_event_nbd.so.6.0 00:02:40.813 SO libspdk_event_ublk.so.3.0 00:02:40.813 SO libspdk_event_scsi.so.6.0 00:02:40.813 LIB libspdk_event_nvmf.a 00:02:40.813 SYMLINK libspdk_event_ublk.so 00:02:40.813 SYMLINK libspdk_event_nbd.so 00:02:41.071 SYMLINK libspdk_event_scsi.so 00:02:41.071 SO libspdk_event_nvmf.so.6.0 00:02:41.071 SYMLINK libspdk_event_nvmf.so 00:02:41.330 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:41.330 CC module/event/subsystems/iscsi/iscsi.o 00:02:41.588 LIB libspdk_event_vhost_scsi.a 00:02:41.588 LIB libspdk_event_iscsi.a 00:02:41.588 SO libspdk_event_vhost_scsi.so.3.0 00:02:41.588 SO libspdk_event_iscsi.so.6.0 00:02:41.588 SYMLINK libspdk_event_vhost_scsi.so 00:02:41.588 SYMLINK libspdk_event_iscsi.so 00:02:41.848 SO libspdk.so.6.0 00:02:41.848 SYMLINK libspdk.so 00:02:42.106 CC test/rpc_client/rpc_client_test.o 00:02:42.106 TEST_HEADER include/spdk/accel.h 00:02:42.106 TEST_HEADER include/spdk/accel_module.h 00:02:42.106 CC app/spdk_nvme_perf/perf.o 00:02:42.106 TEST_HEADER include/spdk/assert.h 00:02:42.106 TEST_HEADER 
include/spdk/barrier.h 00:02:42.106 TEST_HEADER include/spdk/bdev.h 00:02:42.106 TEST_HEADER include/spdk/base64.h 00:02:42.106 TEST_HEADER include/spdk/bdev_module.h 00:02:42.106 TEST_HEADER include/spdk/bit_array.h 00:02:42.106 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.106 TEST_HEADER include/spdk/bit_pool.h 00:02:42.106 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.106 TEST_HEADER include/spdk/blobfs.h 00:02:42.106 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.106 TEST_HEADER include/spdk/blob.h 00:02:42.106 CC app/spdk_nvme_discover/discovery_aer.o 00:02:42.106 TEST_HEADER include/spdk/conf.h 00:02:42.106 TEST_HEADER include/spdk/cpuset.h 00:02:42.106 TEST_HEADER include/spdk/config.h 00:02:42.106 CC app/trace_record/trace_record.o 00:02:42.106 TEST_HEADER include/spdk/crc16.h 00:02:42.106 TEST_HEADER include/spdk/crc32.h 00:02:42.106 TEST_HEADER include/spdk/crc64.h 00:02:42.106 CC app/spdk_lspci/spdk_lspci.o 00:02:42.106 TEST_HEADER include/spdk/dma.h 00:02:42.106 TEST_HEADER include/spdk/endian.h 00:02:42.106 CXX app/trace/trace.o 00:02:42.106 TEST_HEADER include/spdk/dif.h 00:02:42.106 TEST_HEADER include/spdk/env.h 00:02:42.106 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.106 TEST_HEADER include/spdk/event.h 00:02:42.106 TEST_HEADER include/spdk/fd.h 00:02:42.106 CC app/spdk_nvme_identify/identify.o 00:02:42.106 TEST_HEADER include/spdk/fd_group.h 00:02:42.106 TEST_HEADER include/spdk/ftl.h 00:02:42.106 TEST_HEADER include/spdk/file.h 00:02:42.106 CC app/spdk_top/spdk_top.o 00:02:42.106 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.106 TEST_HEADER include/spdk/histogram_data.h 00:02:42.107 TEST_HEADER include/spdk/hexlify.h 00:02:42.107 TEST_HEADER include/spdk/idxd.h 00:02:42.107 TEST_HEADER include/spdk/idxd_spec.h 00:02:42.107 TEST_HEADER include/spdk/ioat.h 00:02:42.107 TEST_HEADER include/spdk/init.h 00:02:42.107 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.107 TEST_HEADER include/spdk/iscsi_spec.h 00:02:42.107 TEST_HEADER include/spdk/ioat_spec.h 00:02:42.107 TEST_HEADER include/spdk/json.h 00:02:42.107 TEST_HEADER include/spdk/jsonrpc.h 00:02:42.107 TEST_HEADER include/spdk/keyring.h 00:02:42.107 TEST_HEADER include/spdk/log.h 00:02:42.107 TEST_HEADER include/spdk/keyring_module.h 00:02:42.107 TEST_HEADER include/spdk/lvol.h 00:02:42.107 TEST_HEADER include/spdk/likely.h 00:02:42.107 TEST_HEADER include/spdk/memory.h 00:02:42.107 TEST_HEADER include/spdk/mmio.h 00:02:42.107 TEST_HEADER include/spdk/notify.h 00:02:42.107 TEST_HEADER include/spdk/nbd.h 00:02:42.107 TEST_HEADER include/spdk/nvme.h 00:02:42.107 TEST_HEADER include/spdk/nvme_intel.h 00:02:42.107 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:42.107 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:42.107 TEST_HEADER include/spdk/nvme_spec.h 00:02:42.107 TEST_HEADER include/spdk/nvme_zns.h 00:02:42.107 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:42.107 CC app/iscsi_tgt/iscsi_tgt.o 00:02:42.107 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:42.107 CC app/nvmf_tgt/nvmf_main.o 00:02:42.107 TEST_HEADER include/spdk/nvmf.h 00:02:42.107 TEST_HEADER include/spdk/nvmf_spec.h 00:02:42.107 TEST_HEADER include/spdk/nvmf_transport.h 00:02:42.107 TEST_HEADER include/spdk/opal.h 00:02:42.107 TEST_HEADER include/spdk/opal_spec.h 00:02:42.107 CC app/spdk_dd/spdk_dd.o 00:02:42.107 TEST_HEADER include/spdk/pipe.h 00:02:42.107 TEST_HEADER include/spdk/pci_ids.h 00:02:42.107 TEST_HEADER include/spdk/reduce.h 00:02:42.107 TEST_HEADER include/spdk/rpc.h 00:02:42.107 TEST_HEADER include/spdk/queue.h 00:02:42.107 TEST_HEADER 
include/spdk/scsi.h 00:02:42.107 TEST_HEADER include/spdk/scheduler.h 00:02:42.107 TEST_HEADER include/spdk/scsi_spec.h 00:02:42.107 TEST_HEADER include/spdk/stdinc.h 00:02:42.107 TEST_HEADER include/spdk/sock.h 00:02:42.107 TEST_HEADER include/spdk/string.h 00:02:42.370 TEST_HEADER include/spdk/thread.h 00:02:42.370 TEST_HEADER include/spdk/trace_parser.h 00:02:42.370 TEST_HEADER include/spdk/trace.h 00:02:42.370 TEST_HEADER include/spdk/tree.h 00:02:42.370 TEST_HEADER include/spdk/util.h 00:02:42.370 TEST_HEADER include/spdk/ublk.h 00:02:42.370 TEST_HEADER include/spdk/version.h 00:02:42.370 TEST_HEADER include/spdk/uuid.h 00:02:42.370 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:42.370 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:42.370 TEST_HEADER include/spdk/vmd.h 00:02:42.370 TEST_HEADER include/spdk/vhost.h 00:02:42.370 TEST_HEADER include/spdk/zipf.h 00:02:42.370 TEST_HEADER include/spdk/xor.h 00:02:42.370 CXX test/cpp_headers/accel_module.o 00:02:42.370 CXX test/cpp_headers/accel.o 00:02:42.370 CXX test/cpp_headers/barrier.o 00:02:42.370 CXX test/cpp_headers/assert.o 00:02:42.370 CXX test/cpp_headers/base64.o 00:02:42.370 CXX test/cpp_headers/bdev.o 00:02:42.370 CXX test/cpp_headers/bdev_module.o 00:02:42.370 CC app/spdk_tgt/spdk_tgt.o 00:02:42.370 CXX test/cpp_headers/bit_array.o 00:02:42.370 CXX test/cpp_headers/bdev_zone.o 00:02:42.370 CXX test/cpp_headers/blobfs_bdev.o 00:02:42.370 CXX test/cpp_headers/blobfs.o 00:02:42.370 CXX test/cpp_headers/bit_pool.o 00:02:42.370 CXX test/cpp_headers/blob.o 00:02:42.370 CXX test/cpp_headers/blob_bdev.o 00:02:42.370 CXX test/cpp_headers/conf.o 00:02:42.370 CXX test/cpp_headers/cpuset.o 00:02:42.370 CXX test/cpp_headers/config.o 00:02:42.370 CXX test/cpp_headers/crc64.o 00:02:42.370 CXX test/cpp_headers/crc16.o 00:02:42.370 CXX test/cpp_headers/dif.o 00:02:42.370 CXX test/cpp_headers/crc32.o 00:02:42.370 CXX test/cpp_headers/dma.o 00:02:42.370 CXX test/cpp_headers/endian.o 00:02:42.370 CXX test/cpp_headers/env_dpdk.o 00:02:42.370 CXX test/cpp_headers/event.o 00:02:42.370 CXX test/cpp_headers/fd_group.o 00:02:42.370 CXX test/cpp_headers/fd.o 00:02:42.370 CXX test/cpp_headers/env.o 00:02:42.370 CXX test/cpp_headers/file.o 00:02:42.370 CXX test/cpp_headers/ftl.o 00:02:42.370 CXX test/cpp_headers/gpt_spec.o 00:02:42.370 CXX test/cpp_headers/hexlify.o 00:02:42.370 CXX test/cpp_headers/histogram_data.o 00:02:42.370 CXX test/cpp_headers/idxd.o 00:02:42.370 CXX test/cpp_headers/ioat.o 00:02:42.370 CXX test/cpp_headers/idxd_spec.o 00:02:42.370 CXX test/cpp_headers/init.o 00:02:42.370 CXX test/cpp_headers/ioat_spec.o 00:02:42.370 CXX test/cpp_headers/iscsi_spec.o 00:02:42.370 CXX test/cpp_headers/jsonrpc.o 00:02:42.370 CXX test/cpp_headers/json.o 00:02:42.370 CXX test/cpp_headers/keyring.o 00:02:42.370 CXX test/cpp_headers/keyring_module.o 00:02:42.370 CXX test/cpp_headers/likely.o 00:02:42.370 CXX test/cpp_headers/log.o 00:02:42.370 CXX test/cpp_headers/lvol.o 00:02:42.370 CXX test/cpp_headers/memory.o 00:02:42.370 CXX test/cpp_headers/mmio.o 00:02:42.370 CXX test/cpp_headers/nbd.o 00:02:42.370 CXX test/cpp_headers/nvme.o 00:02:42.370 CXX test/cpp_headers/notify.o 00:02:42.370 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.370 CXX test/cpp_headers/nvme_intel.o 00:02:42.370 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.370 CXX test/cpp_headers/nvme_spec.o 00:02:42.370 CXX test/cpp_headers/nvme_zns.o 00:02:42.370 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.370 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.370 CXX test/cpp_headers/nvmf.o 00:02:42.370 
CXX test/cpp_headers/nvmf_spec.o 00:02:42.370 CXX test/cpp_headers/nvmf_transport.o 00:02:42.370 CXX test/cpp_headers/opal_spec.o 00:02:42.370 CXX test/cpp_headers/opal.o 00:02:42.370 CXX test/cpp_headers/pci_ids.o 00:02:42.370 CXX test/cpp_headers/pipe.o 00:02:42.370 CXX test/cpp_headers/reduce.o 00:02:42.370 CXX test/cpp_headers/queue.o 00:02:42.370 CXX test/cpp_headers/rpc.o 00:02:42.370 CXX test/cpp_headers/scheduler.o 00:02:42.370 CXX test/cpp_headers/scsi.o 00:02:42.370 CXX test/cpp_headers/sock.o 00:02:42.370 CXX test/cpp_headers/scsi_spec.o 00:02:42.370 CXX test/cpp_headers/stdinc.o 00:02:42.370 CXX test/cpp_headers/string.o 00:02:42.370 CXX test/cpp_headers/thread.o 00:02:42.370 CXX test/cpp_headers/trace_parser.o 00:02:42.370 CXX test/cpp_headers/trace.o 00:02:42.370 CXX test/cpp_headers/tree.o 00:02:42.370 CXX test/cpp_headers/ublk.o 00:02:42.370 CXX test/cpp_headers/util.o 00:02:42.370 CXX test/cpp_headers/uuid.o 00:02:42.370 CXX test/cpp_headers/version.o 00:02:42.370 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.370 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:42.370 CC test/env/memory/memory_ut.o 00:02:42.370 CC test/env/vtophys/vtophys.o 00:02:42.370 CC test/env/pci/pci_ut.o 00:02:42.370 CC test/app/histogram_perf/histogram_perf.o 00:02:42.370 CC test/app/stub/stub.o 00:02:42.370 CC test/app/jsoncat/jsoncat.o 00:02:42.370 CXX test/cpp_headers/vfio_user_spec.o 00:02:42.370 CC test/thread/poller_perf/poller_perf.o 00:02:42.370 CC examples/ioat/perf/perf.o 00:02:42.370 CC examples/util/zipf/zipf.o 00:02:42.370 CC test/dma/test_dma/test_dma.o 00:02:42.370 CXX test/cpp_headers/vhost.o 00:02:42.653 CXX test/cpp_headers/vmd.o 00:02:42.653 CC test/app/bdev_svc/bdev_svc.o 00:02:42.653 CC examples/ioat/verify/verify.o 00:02:42.653 CC app/fio/nvme/fio_plugin.o 00:02:42.653 LINK spdk_lspci 00:02:42.653 CC app/fio/bdev/fio_plugin.o 00:02:42.653 LINK rpc_client_test 00:02:42.925 LINK nvmf_tgt 00:02:42.925 LINK spdk_nvme_discover 00:02:42.925 LINK iscsi_tgt 00:02:42.925 LINK interrupt_tgt 00:02:42.925 CC test/env/mem_callbacks/mem_callbacks.o 00:02:42.925 LINK spdk_trace_record 00:02:42.925 LINK jsoncat 00:02:42.925 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:42.925 LINK env_dpdk_post_init 00:02:42.925 LINK poller_perf 00:02:43.185 CXX test/cpp_headers/xor.o 00:02:43.185 CXX test/cpp_headers/zipf.o 00:02:43.185 LINK zipf 00:02:43.185 LINK vtophys 00:02:43.185 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.185 LINK histogram_perf 00:02:43.185 LINK spdk_tgt 00:02:43.185 LINK stub 00:02:43.185 LINK bdev_svc 00:02:43.185 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:43.185 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:43.185 LINK ioat_perf 00:02:43.185 LINK spdk_dd 00:02:43.185 LINK verify 00:02:43.185 LINK spdk_trace 00:02:43.442 LINK pci_ut 00:02:43.442 LINK test_dma 00:02:43.442 LINK nvme_fuzz 00:02:43.442 LINK spdk_bdev 00:02:43.442 LINK spdk_nvme 00:02:43.442 LINK vhost_fuzz 00:02:43.442 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.442 CC examples/sock/hello_world/hello_sock.o 00:02:43.442 LINK spdk_nvme_perf 00:02:43.442 LINK spdk_top 00:02:43.442 LINK spdk_nvme_identify 00:02:43.700 CC examples/idxd/perf/perf.o 00:02:43.700 CC examples/vmd/led/led.o 00:02:43.700 LINK mem_callbacks 00:02:43.700 CC examples/thread/thread/thread_ex.o 00:02:43.700 CC app/vhost/vhost.o 00:02:43.700 CC test/event/reactor/reactor.o 00:02:43.700 CC test/event/reactor_perf/reactor_perf.o 00:02:43.700 CC test/event/event_perf/event_perf.o 00:02:43.700 CC test/event/app_repeat/app_repeat.o 
00:02:43.700 CC test/event/scheduler/scheduler.o 00:02:43.700 LINK lsvmd 00:02:43.700 LINK led 00:02:43.700 LINK reactor 00:02:43.700 LINK hello_sock 00:02:43.700 LINK reactor_perf 00:02:43.700 LINK event_perf 00:02:43.700 LINK vhost 00:02:43.959 LINK app_repeat 00:02:43.959 CC test/nvme/connect_stress/connect_stress.o 00:02:43.959 LINK memory_ut 00:02:43.959 CC test/nvme/reserve/reserve.o 00:02:43.959 LINK thread 00:02:43.959 LINK idxd_perf 00:02:43.959 CC test/nvme/overhead/overhead.o 00:02:43.959 CC test/nvme/simple_copy/simple_copy.o 00:02:43.959 CC test/nvme/fdp/fdp.o 00:02:43.959 CC test/nvme/compliance/nvme_compliance.o 00:02:43.959 CC test/nvme/fused_ordering/fused_ordering.o 00:02:43.959 CC test/nvme/e2edp/nvme_dp.o 00:02:43.959 CC test/blobfs/mkfs/mkfs.o 00:02:43.959 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:43.959 CC test/nvme/boot_partition/boot_partition.o 00:02:43.959 CC test/nvme/aer/aer.o 00:02:43.959 CC test/nvme/sgl/sgl.o 00:02:43.959 CC test/nvme/reset/reset.o 00:02:43.959 CC test/nvme/err_injection/err_injection.o 00:02:43.959 CC test/nvme/startup/startup.o 00:02:43.959 CC test/nvme/cuse/cuse.o 00:02:43.959 CC test/accel/dif/dif.o 00:02:43.959 LINK scheduler 00:02:43.959 CC test/lvol/esnap/esnap.o 00:02:43.959 LINK connect_stress 00:02:43.959 LINK reserve 00:02:43.959 LINK boot_partition 00:02:43.959 LINK fused_ordering 00:02:43.959 LINK startup 00:02:43.959 LINK doorbell_aers 00:02:43.959 LINK err_injection 00:02:43.959 LINK mkfs 00:02:44.217 LINK simple_copy 00:02:44.218 LINK sgl 00:02:44.218 LINK reset 00:02:44.218 LINK aer 00:02:44.218 LINK nvme_dp 00:02:44.218 LINK overhead 00:02:44.218 LINK fdp 00:02:44.218 LINK nvme_compliance 00:02:44.218 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:44.218 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:44.218 CC examples/nvme/abort/abort.o 00:02:44.218 CC examples/nvme/hello_world/hello_world.o 00:02:44.218 CC examples/nvme/reconnect/reconnect.o 00:02:44.218 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:44.218 CC examples/nvme/hotplug/hotplug.o 00:02:44.218 CC examples/nvme/arbitration/arbitration.o 00:02:44.218 LINK dif 00:02:44.218 CC examples/blob/cli/blobcli.o 00:02:44.476 CC examples/accel/perf/accel_perf.o 00:02:44.476 CC examples/blob/hello_world/hello_blob.o 00:02:44.476 LINK iscsi_fuzz 00:02:44.476 LINK pmr_persistence 00:02:44.476 LINK cmb_copy 00:02:44.476 LINK hello_world 00:02:44.476 LINK hotplug 00:02:44.476 LINK abort 00:02:44.476 LINK reconnect 00:02:44.476 LINK arbitration 00:02:44.476 LINK hello_blob 00:02:44.768 LINK nvme_manage 00:02:44.768 LINK accel_perf 00:02:44.768 LINK blobcli 00:02:44.768 CC test/bdev/bdevio/bdevio.o 00:02:44.768 LINK cuse 00:02:45.027 LINK bdevio 00:02:45.285 CC examples/bdev/hello_world/hello_bdev.o 00:02:45.285 CC examples/bdev/bdevperf/bdevperf.o 00:02:45.543 LINK hello_bdev 00:02:45.801 LINK bdevperf 00:02:46.368 CC examples/nvmf/nvmf/nvmf.o 00:02:46.627 LINK nvmf 00:02:47.195 LINK esnap 00:02:47.763 00:02:47.763 real 0m49.122s 00:02:47.763 user 6m18.098s 00:02:47.763 sys 3m59.508s 00:02:47.763 17:54:47 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:47.763 17:54:47 make -- common/autotest_common.sh@10 -- $ set +x 00:02:47.763 ************************************ 00:02:47.763 END TEST make 00:02:47.763 ************************************ 00:02:47.763 17:54:47 -- common/autotest_common.sh@1142 -- $ return 0 00:02:47.763 17:54:47 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:47.763 17:54:47 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:02:47.763 17:54:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:47.763 17:54:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.763 17:54:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:47.763 17:54:47 -- pm/common@44 -- $ pid=1345708 00:02:47.763 17:54:47 -- pm/common@50 -- $ kill -TERM 1345708 00:02:47.763 17:54:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.763 17:54:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:47.763 17:54:47 -- pm/common@44 -- $ pid=1345709 00:02:47.763 17:54:47 -- pm/common@50 -- $ kill -TERM 1345709 00:02:47.763 17:54:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.764 17:54:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:47.764 17:54:47 -- pm/common@44 -- $ pid=1345712 00:02:47.764 17:54:47 -- pm/common@50 -- $ kill -TERM 1345712 00:02:47.764 17:54:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:47.764 17:54:47 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:47.764 17:54:47 -- pm/common@44 -- $ pid=1345738 00:02:47.764 17:54:47 -- pm/common@50 -- $ sudo -E kill -TERM 1345738 00:02:47.764 17:54:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:47.764 17:54:48 -- nvmf/common.sh@7 -- # uname -s 00:02:47.764 17:54:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:47.764 17:54:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:47.764 17:54:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:47.764 17:54:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:47.764 17:54:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:47.764 17:54:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:47.764 17:54:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:47.764 17:54:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:47.764 17:54:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:47.764 17:54:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:47.764 17:54:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:47.764 17:54:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:47.764 17:54:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:47.764 17:54:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:47.764 17:54:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:47.764 17:54:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:47.764 17:54:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:47.764 17:54:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:47.764 17:54:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:47.764 17:54:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:47.764 17:54:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.764 17:54:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.764 17:54:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.764 17:54:48 -- paths/export.sh@5 -- # export PATH 00:02:47.764 17:54:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:47.764 17:54:48 -- nvmf/common.sh@47 -- # : 0 00:02:47.764 17:54:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:47.764 17:54:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:47.764 17:54:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:47.764 17:54:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:47.764 17:54:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:47.764 17:54:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:47.764 17:54:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:47.764 17:54:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:47.764 17:54:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:47.764 17:54:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:47.764 17:54:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:47.764 17:54:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:47.764 17:54:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:47.764 17:54:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:47.764 17:54:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:47.764 17:54:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:47.764 17:54:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:47.764 17:54:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:47.764 17:54:48 -- spdk/autotest.sh@48 -- # udevadm_pid=1406871 00:02:47.764 17:54:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:47.764 17:54:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:47.764 17:54:48 -- pm/common@17 -- # local monitor 00:02:47.764 17:54:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.023 17:54:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.023 17:54:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.023 17:54:48 -- pm/common@21 -- # date +%s 00:02:48.023 17:54:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.023 17:54:48 -- pm/common@21 -- # date +%s 00:02:48.023 17:54:48 -- 
pm/common@25 -- # sleep 1 00:02:48.023 17:54:48 -- pm/common@21 -- # date +%s 00:02:48.023 17:54:48 -- pm/common@21 -- # date +%s 00:02:48.023 17:54:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721058888 00:02:48.023 17:54:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721058888 00:02:48.023 17:54:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721058888 00:02:48.023 17:54:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721058888 00:02:48.023 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721058888_collect-vmstat.pm.log 00:02:48.023 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721058888_collect-cpu-load.pm.log 00:02:48.023 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721058888_collect-cpu-temp.pm.log 00:02:48.023 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721058888_collect-bmc-pm.bmc.pm.log 00:02:48.960 17:54:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:48.960 17:54:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:48.960 17:54:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:48.960 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:02:48.960 17:54:49 -- spdk/autotest.sh@59 -- # create_test_list 00:02:48.960 17:54:49 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:48.960 17:54:49 -- common/autotest_common.sh@10 -- # set +x 00:02:48.960 17:54:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:48.960 17:54:49 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:48.960 17:54:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:48.960 17:54:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:48.960 17:54:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:48.960 17:54:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.960 17:54:49 -- common/autotest_common.sh@1455 -- # uname 00:02:48.960 17:54:49 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:48.960 17:54:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.960 17:54:49 -- common/autotest_common.sh@1475 -- # uname 00:02:48.960 17:54:49 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:48.960 17:54:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:48.960 17:54:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:48.960 17:54:49 -- spdk/autotest.sh@72 -- # hash lcov 00:02:48.960 17:54:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:48.960 17:54:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:48.960 --rc lcov_branch_coverage=1 00:02:48.960 --rc 
lcov_function_coverage=1 00:02:48.960 --rc genhtml_branch_coverage=1 00:02:48.960 --rc genhtml_function_coverage=1 00:02:48.960 --rc genhtml_legend=1 00:02:48.960 --rc geninfo_all_blocks=1 00:02:48.960 ' 00:02:48.960 17:54:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:48.960 --rc lcov_branch_coverage=1 00:02:48.960 --rc lcov_function_coverage=1 00:02:48.960 --rc genhtml_branch_coverage=1 00:02:48.960 --rc genhtml_function_coverage=1 00:02:48.960 --rc genhtml_legend=1 00:02:48.960 --rc geninfo_all_blocks=1 00:02:48.960 ' 00:02:48.960 17:54:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:48.960 --rc lcov_branch_coverage=1 00:02:48.960 --rc lcov_function_coverage=1 00:02:48.960 --rc genhtml_branch_coverage=1 00:02:48.960 --rc genhtml_function_coverage=1 00:02:48.960 --rc genhtml_legend=1 00:02:48.960 --rc geninfo_all_blocks=1 00:02:48.960 --no-external' 00:02:48.960 17:54:49 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:48.960 --rc lcov_branch_coverage=1 00:02:48.960 --rc lcov_function_coverage=1 00:02:48.960 --rc genhtml_branch_coverage=1 00:02:48.960 --rc genhtml_function_coverage=1 00:02:48.960 --rc genhtml_legend=1 00:02:48.960 --rc geninfo_all_blocks=1 00:02:48.960 --no-external' 00:02:48.960 17:54:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:48.960 lcov: LCOV version 1.14 00:02:48.960 17:54:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:50.339 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:50.339 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 
00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 
00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:50.599 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:50.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:50.859 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:50.859 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:50.860 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:50.860 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:51.119 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:51.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:51.119 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:03.329 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:03.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:13.302 17:55:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:13.302 17:55:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:13.302 17:55:13 -- common/autotest_common.sh@10 -- # set +x 00:03:13.302 17:55:13 -- spdk/autotest.sh@91 -- # rm -f 00:03:13.302 17:55:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.532 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.1 (8086 
2021): Already using the ioatdma driver 00:03:17.532 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:17.532 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:17.532 17:55:17 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:17.532 17:55:17 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:17.532 17:55:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:17.532 17:55:17 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:17.532 17:55:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:17.532 17:55:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:17.532 17:55:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:17.532 17:55:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.532 17:55:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:17.532 17:55:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:17.532 17:55:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:17.532 17:55:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:17.532 17:55:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:17.532 17:55:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:17.532 17:55:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:17.532 No valid GPT data, bailing 00:03:17.532 17:55:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.532 17:55:17 -- scripts/common.sh@391 -- # pt= 00:03:17.532 17:55:17 -- scripts/common.sh@392 -- # return 1 00:03:17.532 17:55:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:17.532 1+0 records in 00:03:17.532 1+0 records out 00:03:17.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00208427 s, 503 MB/s 00:03:17.532 17:55:17 -- spdk/autotest.sh@118 -- # sync 00:03:17.532 17:55:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:17.532 17:55:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:17.532 17:55:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.651 17:55:25 -- spdk/autotest.sh@124 -- # uname -s 00:03:25.651 17:55:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:25.651 17:55:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.651 17:55:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.651 17:55:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.651 17:55:25 -- common/autotest_common.sh@10 -- # set +x 00:03:25.651 ************************************ 00:03:25.651 START TEST setup.sh 00:03:25.651 ************************************ 00:03:25.651 17:55:25 setup.sh -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.651 * Looking for test storage... 00:03:25.651 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:25.651 17:55:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:25.651 17:55:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:25.651 17:55:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:25.651 17:55:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.651 17:55:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.651 17:55:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.651 ************************************ 00:03:25.651 START TEST acl 00:03:25.651 ************************************ 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:25.651 * Looking for test storage... 00:03:25.651 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:25.651 17:55:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.651 17:55:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.651 17:55:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:25.651 17:55:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:25.651 17:55:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:25.651 17:55:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:25.651 17:55:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:25.651 17:55:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.651 17:55:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.848 17:55:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:29.848 17:55:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:29.848 17:55:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.848 17:55:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:29.848 17:55:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.848 17:55:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:33.139 Hugepages 00:03:33.139 node hugesize free / total 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- 
# continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 00:03:33.139 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:33.139 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:33.140 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:33.398 17:55:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:33.398 17:55:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.398 17:55:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.398 17:55:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.398 ************************************ 00:03:33.398 START TEST denied 00:03:33.398 ************************************ 00:03:33.398 17:55:33 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:33.398 17:55:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 
0000:d8:00.0' 00:03:33.398 17:55:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:33.398 17:55:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:33.398 17:55:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.398 17:55:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:37.586 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.586 17:55:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.858 00:03:42.858 real 0m8.547s 00:03:42.858 user 0m2.530s 00:03:42.858 sys 0m5.191s 00:03:42.858 17:55:42 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.858 17:55:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:42.858 ************************************ 00:03:42.858 END TEST denied 00:03:42.858 ************************************ 00:03:42.858 17:55:42 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:42.858 17:55:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:42.858 17:55:42 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.858 17:55:42 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.858 17:55:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.858 ************************************ 00:03:42.858 START TEST allowed 00:03:42.858 ************************************ 00:03:42.858 17:55:42 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:42.858 17:55:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:42.858 17:55:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:42.858 17:55:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:42.858 17:55:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.858 17:55:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:48.136 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.136 17:55:48 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:48.136 17:55:48 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:48.136 17:55:48 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:48.136 17:55:48 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.136 17:55:48 setup.sh.acl.allowed -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.410 00:03:52.410 real 0m10.366s 00:03:52.410 user 0m2.832s 00:03:52.410 sys 0m5.710s 00:03:52.410 17:55:52 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.410 17:55:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:52.410 ************************************ 00:03:52.410 END TEST allowed 00:03:52.410 ************************************ 00:03:52.410 17:55:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:52.410 00:03:52.410 real 0m27.467s 00:03:52.410 user 0m8.450s 00:03:52.410 sys 0m16.667s 00:03:52.410 17:55:52 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.410 17:55:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.410 ************************************ 00:03:52.410 END TEST acl 00:03:52.410 ************************************ 00:03:52.410 17:55:52 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.410 17:55:52 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:52.410 17:55:52 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.410 17:55:52 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.410 17:55:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.670 ************************************ 00:03:52.670 START TEST hugepages 00:03:52.670 ************************************ 00:03:52.671 17:55:52 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:52.671 * Looking for test storage... 00:03:52.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 37617180 kB' 'MemAvailable: 41168808 kB' 'Buffers: 4096 kB' 
'Cached: 14404732 kB' 'SwapCached: 0 kB' 'Active: 11457660 kB' 'Inactive: 3471256 kB' 'Active(anon): 11015412 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523984 kB' 'Mapped: 214340 kB' 'Shmem: 10495324 kB' 'KReclaimable: 270524 kB' 'Slab: 901356 kB' 'SReclaimable: 270524 kB' 'SUnreclaim: 630832 kB' 'KernelStack: 22432 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439044 kB' 'Committed_AS: 12398276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218808 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.671 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var 
val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.672 
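The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key (IFS=': '; read -r var val _), skipping every field until it reaches Hugepagesize and echoing its value, which hugepages.sh then records as default_hugepages=2048 before enumerating the NUMA nodes. A minimal sketch of that lookup, assuming nothing beyond the standard /proc/meminfo layout (a standalone illustration, not the actual helper):

  # print the value of one /proc/meminfo key, e.g. Hugepagesize -> 2048
  get=Hugepagesize
  while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
          echo "$val"
          break
      fi
  done < /proc/meminfo

The real helper also mapfiles the whole file (optionally a per-node meminfo under /sys/devices/system/node) and strips the leading "Node N" prefix, but the scan-and-compare loop has the same shape as the trace shows.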
17:55:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:52.672 17:55:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:52.672 17:55:52 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.672 17:55:52 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.672 17:55:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.672 ************************************ 00:03:52.672 START TEST default_setup 00:03:52.673 ************************************ 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.673 17:55:52 
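Here clear_hp resets every per-node hugepage pool to zero before the default_setup test begins: for each node it writes 0 into each hugepages-*/nr_hugepages entry (xtrace shows only the echo, not the redirection) and then exports CLEAR_HUGE=yes. A rough equivalent of that reset loop, using plain globs instead of the script's extglob pattern and assuming root privileges:

  # zero out all hugepage pools on every NUMA node
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes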
setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.673 17:55:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:56.865 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.865 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:58.773 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.773 17:55:58 
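By this point get_test_nr_hugepages has turned the requested size into a page count: 2097152 divided by the 2048 kB default page size gives nr_hugepages=1024 (2 GiB worth of 2 MiB pages, if the size is in kB), and since a single node id ('0') was passed, nodes_test[0]=1024. The conversion is plain integer division, for example:

  size=2097152             # requested size as passed to get_test_nr_hugepages
  default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
  echo $((size / default_hugepages))   # -> 1024 pages, all on node 0

setup.sh then rebinds the ioatdma and NVMe devices to vfio-pci (the lines above), and verify_nr_hugepages re-reads /proc/meminfo to confirm the pool: AnonHugePages, HugePages_Surp and HugePages_Rsvd are each fetched with the same get_meminfo scan traced below.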
setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39775908 kB' 'MemAvailable: 43327344 kB' 'Buffers: 4096 kB' 'Cached: 14404868 kB' 'SwapCached: 0 kB' 'Active: 11475544 kB' 'Inactive: 3471256 kB' 'Active(anon): 11033296 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540928 kB' 'Mapped: 214536 kB' 'Shmem: 10495460 kB' 'KReclaimable: 270140 kB' 'Slab: 898036 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 627896 kB' 'KernelStack: 22528 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12413356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218776 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.773 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39777736 kB' 'MemAvailable: 43329172 kB' 'Buffers: 4096 kB' 'Cached: 14404868 kB' 'SwapCached: 0 kB' 'Active: 11476212 kB' 'Inactive: 3471256 kB' 'Active(anon): 11033964 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541520 kB' 'Mapped: 214536 kB' 'Shmem: 10495460 kB' 'KReclaimable: 270140 kB' 'Slab: 898036 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 627896 kB' 'KernelStack: 22624 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12413372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218824 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.774 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # return 0 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39777556 kB' 'MemAvailable: 43328992 kB' 'Buffers: 4096 kB' 'Cached: 14404888 kB' 'SwapCached: 0 kB' 'Active: 11474452 kB' 'Inactive: 3471256 kB' 'Active(anon): 11032204 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540116 kB' 'Mapped: 214424 kB' 'Shmem: 10495480 kB' 'KReclaimable: 270140 kB' 'Slab: 898064 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 627924 kB' 'KernelStack: 22528 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12413396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.775 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.776 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.777 nr_hugepages=1024 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.777 resv_hugepages=0 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.777 surplus_hugepages=0 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.777 anon_hugepages=0 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.777 
17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39777276 kB' 'MemAvailable: 43328712 kB' 'Buffers: 4096 kB' 'Cached: 14404908 kB' 'SwapCached: 0 kB' 'Active: 11474972 kB' 'Inactive: 3471256 kB' 'Active(anon): 11032724 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540616 kB' 'Mapped: 214424 kB' 'Shmem: 10495500 kB' 'KReclaimable: 270140 kB' 'Slab: 898064 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 627924 kB' 'KernelStack: 22576 kB' 'PageTables: 9344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12413416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.777 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 
17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.778 17:55:58 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
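(The entries that follow check node 0 in isolation: get_nodes above walked /sys/devices/system/node/node[0-9]*, recorded each node's hugepage count in nodes_sys (1024 on node0, 0 on node1, no_nodes=2), and get_meminfo is now re-run against /sys/devices/system/node/node0/meminfo. The same per-node accounting can be reproduced by hand with the short sketch below; it is illustrative only, the nodes_sys name is kept from the trace, and the awk parsing is an assumption rather than what the harness does.)

```bash
# Illustrative per-node hugepage accounting, mirroring the get_nodes step above.
# Assumption: parses with awk instead of the harness's IFS=': ' read loop.
declare -A nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}
    nodes_sys[$id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
echo "no_nodes=${#nodes_sys[@]}"
for id in "${!nodes_sys[@]}"; do
    echo "node${id}: ${nodes_sys[$id]} hugepages"   # e.g. node0: 1024, node1: 0
done
```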
00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23322724 kB' 'MemUsed: 9316416 kB' 'SwapCached: 0 kB' 'Active: 6143948 kB' 'Inactive: 85080 kB' 'Active(anon): 5929076 kB' 'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5971904 kB' 'Mapped: 88060 kB' 'AnonPages: 260452 kB' 'Shmem: 5671952 kB' 'KernelStack: 11880 kB' 'PageTables: 5356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 401420 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 266844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.778 17:55:59 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.778 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.779 node0=1024 expecting 1024 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.779 00:03:58.779 real 0m6.045s 00:03:58.779 user 0m1.532s 00:03:58.779 sys 0m2.715s 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.779 17:55:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:58.779 ************************************ 00:03:58.779 END TEST default_setup 00:03:58.779 ************************************ 00:03:58.779 17:55:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.779 17:55:59 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:58.779 17:55:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.779 17:55:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.779 17:55:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.779 ************************************ 00:03:58.779 START TEST per_node_1G_alloc 00:03:58.779 ************************************ 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.779 17:55:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:02.984 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:02.984 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
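The trace that follows is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches the key it was asked for (AnonHugePages here). A minimal stand-alone sketch of that lookup, reconstructed from the trace alone; the function name and variable names below are illustrative, not copied from the SPDK sources:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced below; assumes a Linux /proc/meminfo.
    shopt -s extglob                      # for the +([0-9]) prefix-strip pattern
    get_meminfo_sketch() {
        local get=$1 node=${2:-}          # e.g. get=AnonHugePages, node=0 (node is optional)
        local mem_f=/proc/meminfo mem line var val rest
        # Assumption: fall back to the per-node meminfo file when a node id is supplied.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val rest <<< "$line"
            [[ $var == "$get" ]] || continue   # skip fields until the requested key matches
            echo "${val:-0}"
            return 0
        done
        echo 0
    }
    get_meminfo_sketch AnonHugePages      # prints the AnonHugePages value in kB (0 on this host)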
00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39810196 kB' 'MemAvailable: 43361632 kB' 'Buffers: 4096 kB' 'Cached: 14405016 kB' 'SwapCached: 0 kB' 'Active: 11475608 kB' 'Inactive: 3471256 kB' 'Active(anon): 11033360 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541484 kB' 'Mapped: 213892 kB' 'Shmem: 10495608 kB' 'KReclaimable: 270140 kB' 'Slab: 898532 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628392 kB' 'KernelStack: 22304 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12410116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218940 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.984 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.984 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39816748 kB' 'MemAvailable: 43368184 kB' 'Buffers: 4096 kB' 'Cached: 14405016 kB' 'SwapCached: 0 kB' 'Active: 11470360 kB' 'Inactive: 3471256 kB' 'Active(anon): 11028112 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535696 kB' 'Mapped: 213836 kB' 'Shmem: 10495608 kB' 'KReclaimable: 270140 kB' 'Slab: 898524 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628384 kB' 'KernelStack: 22320 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12404012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218920 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.985 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 
17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39815388 kB' 'MemAvailable: 43366824 kB' 'Buffers: 4096 kB' 'Cached: 14405016 kB' 'SwapCached: 0 kB' 'Active: 11472400 kB' 'Inactive: 3471256 kB' 'Active(anon): 11030152 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537876 kB' 'Mapped: 213328 kB' 'Shmem: 10495608 kB' 'KReclaimable: 270140 kB' 'Slab: 898584 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628444 kB' 'KernelStack: 22384 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12419140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:02.986 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.987 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.988 nr_hugepages=1024 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.988 resv_hugepages=0 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.988 surplus_hugepages=0 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.988 anon_hugepages=0 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39816544 kB' 'MemAvailable: 43367980 kB' 'Buffers: 4096 kB' 'Cached: 14405056 kB' 'SwapCached: 0 kB' 'Active: 11470680 kB' 'Inactive: 3471256 kB' 'Active(anon): 11028432 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536080 kB' 'Mapped: 213328 kB' 'Shmem: 10495648 kB' 'KReclaimable: 270140 kB' 'Slab: 898568 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628428 kB' 'KernelStack: 22288 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12403696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 
17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.988 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
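[Editor's note] The interleaved trace above and below is setup/common.sh's get_meminfo helper walking a meminfo dump field by field: it mapfile's the file, strips any "Node N " prefix, splits each line on ": ", skips every field that is not the one requested ("continue"), and echoes the value when the key matches. The following is a minimal sketch reconstructed from this trace only; the function name get_meminfo_sketch and its exact structure are assumptions, not the code shipped in the SPDK repository.

    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _ line mem_f mem
        mem_f=/proc/meminfo
        # per-node files exist only when a node id is given and valid
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every field with "Node N "; strip it before parsing
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"   # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
            return 0
        done
        return 1
    }

Called as "get_meminfo_sketch HugePages_Surp 0", this would print 0 on the box traced here, matching the node0 dump further down.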
00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
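[Editor's note] The hugepages.sh trace that follows (resv=0, nr_hugepages=1024, nodes_sys[node]=512 for each node, no_nodes=2, then get_meminfo HugePages_Surp per node) is verifying that the 1024 requested 2048 kB pages were split evenly across the two NUMA nodes. A rough sketch of that per-node check, reusing the get_meminfo_sketch helper above; this is illustrative only and the variable names (expected_per_node, total) are made up, not taken from test/setup/hugepages.sh.

    expected_per_node=512   # mirrors nodes_sys[...]=512 in the trace below
    total=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        got=$(get_meminfo_sketch HugePages_Total "$node")
        surp=$(get_meminfo_sketch HugePages_Surp "$node")
        printf 'node%s: HugePages_Total=%s HugePages_Surp=%s\n' "$node" "$got" "$surp"
        (( got == expected_per_node + surp )) || echo "node$node: unexpected hugepage count" >&2
        (( total += got ))
    done
    (( total == 1024 )) || echo "expected 1024 hugepages system-wide, found $total" >&2

On this machine the check passes: each node reports HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0, summing to the 1024 pages requested.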
00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24422576 kB' 'MemUsed: 8216564 kB' 'SwapCached: 0 kB' 'Active: 6141572 kB' 'Inactive: 85080 kB' 'Active(anon): 5926700 kB' 
'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5972048 kB' 'Mapped: 87156 kB' 'AnonPages: 257828 kB' 'Shmem: 5672096 kB' 'KernelStack: 11768 kB' 'PageTables: 5124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 401820 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 267244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 
17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.989 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 
17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.990 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 kB' 'MemFree: 15394300 kB' 'MemUsed: 12261744 kB' 'SwapCached: 0 kB' 'Active: 5329068 kB' 'Inactive: 3386176 kB' 'Active(anon): 5101692 kB' 'Inactive(anon): 0 kB' 'Active(file): 227376 kB' 'Inactive(file): 3386176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437132 kB' 'Mapped: 126172 kB' 'AnonPages: 278156 kB' 'Shmem: 4823580 kB' 'KernelStack: 10552 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135564 kB' 'Slab: 496748 kB' 'SReclaimable: 135564 kB' 'SUnreclaim: 361184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 
17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.992 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:02.993 node0=512 expecting 512 00:04:02.993 17:56:03 
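
The trace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node1/meminfo one key at a time until it reaches HugePages_Surp and echoes its value (0 here). A condensed sketch of that lookup, assuming the same file layout; the helper name and exact control flow below are illustrative, not the script's verbatim code:

  shopt -s extglob                          # needed for the "Node N " prefix strip
  get_node_meminfo() {                      # usage: get_node_meminfo <field> [node]
      local get=$1 node=$2 mem_f=/proc/meminfo
      local -a mem
      local line var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      echo 0
  }
  # get_node_meminfo HugePages_Surp 1   -> 0 on this node, matching the trace
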
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:02.993 node1=512 expecting 512 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:02.993 00:04:02.993 real 0m4.075s 00:04:02.993 user 0m1.461s 00:04:02.993 sys 0m2.661s 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.993 17:56:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.993 ************************************ 00:04:02.993 END TEST per_node_1G_alloc 00:04:02.993 ************************************ 00:04:02.993 17:56:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.993 17:56:03 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:02.993 17:56:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.993 17:56:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.993 17:56:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.993 ************************************ 00:04:02.993 START TEST even_2G_alloc 00:04:02.993 ************************************ 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 
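
per_node_1G_alloc finishes cleanly here: node0 and node1 each hold the expected 512 pages and the test takes about 4.1 s wall time. run_test then starts even_2G_alloc, which requests 2097152 kB; against the 2048 kB Hugepagesize the harness reads from /proc/meminfo that is 1024 pages, split evenly across the two NUMA nodes. A back-of-the-envelope check of the numbers the trace prints (variable names mirror the trace but the snippet itself is only illustrative):

  size_kb=2097152                                      # requested test size (2 GiB in kB)
  hugepage_kb=2048                                      # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepage_kb ))             # -> 1024
  no_nodes=2
  per_node=$(( nr_hugepages / no_nodes ))               # -> 512 per node
  echo "nr_hugepages=$nr_hugepages, per node=$per_node"
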
00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.993 17:56:03 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:07.190 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.190 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.190 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.190 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.190 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.190 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.191 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.191 17:56:06 
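
With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes set, the harness re-runs scripts/setup.sh; the NVMe and IOAT devices are already bound to vfio-pci, so only the hugepage pool is (re)allocated. The log does not show setup.sh's internals, but an even per-node allocation of this kind is normally requested through the standard per-node sysfs knobs, roughly as sketched below (the sysfs paths are the kernel's documented interface; treat the loop itself as an illustration, not the script's exact code):

  NRHUGE=1024
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( NRHUGE / ${#nodes[@]} ))                 # 512 each on this 2-node box
  for node in "${nodes[@]}"; do
      echo "$per_node" | sudo tee \
          "$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
  done
  grep -E 'HugePages_(Total|Free)' /proc/meminfo        # expect 1024 / 1024 afterwards
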
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39813588 kB' 'MemAvailable: 43365024 kB' 'Buffers: 4096 kB' 'Cached: 14405200 kB' 'SwapCached: 0 kB' 'Active: 11473028 kB' 'Inactive: 3471256 kB' 'Active(anon): 11030780 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538164 kB' 'Mapped: 213380 kB' 'Shmem: 10495792 kB' 'KReclaimable: 270140 kB' 'Slab: 898864 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628724 kB' 'KernelStack: 22592 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12410264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219144 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
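
verify_nr_hugepages starts by sampling AnonHugePages from /proc/meminfo (no node argument was given, so mem_f stays /proc/meminfo), and the dump above already shows 'AnonHugePages: 0 kB' alongside a pool of 1024 total / 1024 free hugepages. The key-by-key walk running here is common.sh's pure-bash way of extracting that field; outside the harness the same numbers can be pulled with a one-liner, shown only as an equivalent spot check, not part of the test:

  awk '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo    # -> 0    (kB)
  awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo    # -> 1024
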
00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.191 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39816320 kB' 'MemAvailable: 43367756 kB' 'Buffers: 4096 kB' 'Cached: 14405200 kB' 'SwapCached: 0 kB' 'Active: 11472476 kB' 'Inactive: 3471256 kB' 'Active(anon): 11030228 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537612 kB' 'Mapped: 213388 kB' 'Shmem: 10495792 kB' 'KReclaimable: 270140 kB' 'Slab: 898840 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628700 kB' 'KernelStack: 22544 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12406252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219144 kB' 'VmallocChunk: 0 kB' 'Percpu: 
82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.192 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 
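
This second walk targets HugePages_Surp, the count of surplus pages the kernel has allocated beyond the configured pool (such pages can only appear when vm.nr_overcommit_hugepages allows overcommit); the dump above reports 0, so none of the 1024 pages are surplus. Equivalent direct checks, again only illustrative and not part of the harness:

  grep HugePages_Surp /proc/meminfo             # -> HugePages_Surp: 0
  cat /proc/sys/vm/nr_overcommit_hugepages      # overcommit ceiling for surplus pages
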
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.193 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39813648 kB' 'MemAvailable: 43365084 kB' 'Buffers: 4096 kB' 'Cached: 14405220 kB' 'SwapCached: 0 kB' 'Active: 11472280 kB' 'Inactive: 3471256 kB' 'Active(anon): 11030032 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537396 kB' 'Mapped: 213384 kB' 'Shmem: 10495812 kB' 'KReclaimable: 270140 kB' 'Slab: 898840 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628700 kB' 'KernelStack: 22560 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12407880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219144 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
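The xtrace above (and in the surrounding entries) is the even_2G_alloc test driving setup/common.sh's get_meminfo helper: it reads /proc/meminfo, or a per-node meminfo file when a node number is given, and walks every field until it reaches the requested one (here HugePages_Surp, which returns 0 and sets surp=0, before the same walk is repeated for HugePages_Rsvd). A minimal sketch of that helper, reconstructed from the trace rather than copied from the script source (exact line layout and quoting are assumptions), looks like this:

```bash
#!/usr/bin/env bash
# Sketch of the get_meminfo helper exercised in the trace above,
# reconstructed from the xtrace output; not the verbatim
# test/setup/common.sh source.
shopt -s extglob    # needed for the "Node +([0-9]) " prefix pattern below

get_meminfo() {
	local get=$1 node=$2   # e.g. get=HugePages_Surp; empty node => system-wide
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Prefer the per-node sysfs view when a node number is supplied.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		# Skip every field until the requested one, then print its value.
		[[ $var == "$get" ]] || continue
		echo "$val" && return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp      # system-wide surplus pages (0 in this run)
get_meminfo HugePages_Surp 0    # same field, but for NUMA node 0
```

The field-by-field `[[ ... ]]` / `continue` entries that dominate this part of the log are simply that loop being traced once per /proc/meminfo line.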
00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.194 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.195 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.196 nr_hugepages=1024 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.196 resv_hugepages=0 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.196 surplus_hugepages=0 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:04:07.196 anon_hugepages=0 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.196 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39815772 kB' 'MemAvailable: 43367208 kB' 'Buffers: 4096 kB' 'Cached: 14405240 kB' 'SwapCached: 0 kB' 'Active: 11471328 kB' 'Inactive: 3471256 kB' 'Active(anon): 11029080 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536412 kB' 'Mapped: 213384 kB' 'Shmem: 10495832 kB' 'KReclaimable: 270140 kB' 'Slab: 898944 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628804 kB' 'KernelStack: 22240 kB' 'PageTables: 8556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12405060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.197 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
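Immediately below, the trace finishes the HugePages_Total lookup (it echoes 1024) and even_2G_alloc then checks the pool and splits the expectation across the two NUMA nodes, 512 pages each, before re-reading the per-node counters from /sys/devices/system/node/node0/meminfo. A hedged sketch of that accounting, reconstructed from the trace rather than the test/setup/hugepages.sh source (it assumes get_meminfo and the extglob setting from the previous sketch; any names not visible in the trace are hypothetical):

```bash
#!/usr/bin/env bash
# Accounting sketch based on the traced hugepages.sh lines; assumes the
# get_meminfo sketch above is already defined in this shell and that
# "shopt -s extglob" is in effect.
nr_hugepages=1024                           # 1024 x 2048 kB pages = 2 GB
surp=$(get_meminfo HugePages_Surp)          # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The pool is only considered healthy if the kernel reports exactly the
# requested pages plus any surplus and reserved pages.
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

# "Even" allocation: each of the two NUMA nodes is expected to hold 512
# pages; per-node counters come from /sys/devices/system/node/node<N>/meminfo.
declare -a nodes_test=()
for node in /sys/devices/system/node/node+([0-9]); do
	nodes_test[${node##*node}]=512
done
for node in "${!nodes_test[@]}"; do
	(( $(get_meminfo HugePages_Surp "$node") == 0 )) || exit 1
done
```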
00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.198 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24407076 kB' 'MemUsed: 8232064 kB' 'SwapCached: 0 kB' 'Active: 6139804 kB' 'Inactive: 85080 kB' 'Active(anon): 5924932 kB' 'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5972120 kB' 'Mapped: 87160 kB' 'AnonPages: 255896 kB' 'Shmem: 5672168 kB' 'KernelStack: 11736 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 402164 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 267588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.198 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.199 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 kB' 'MemFree: 15409696 kB' 'MemUsed: 12246348 kB' 'SwapCached: 0 kB' 'Active: 5331664 kB' 'Inactive: 3386176 kB' 'Active(anon): 5104288 kB' 'Inactive(anon): 0 kB' 'Active(file): 227376 kB' 'Inactive(file): 3386176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437260 kB' 'Mapped: 126188 kB' 'AnonPages: 280648 kB' 'Shmem: 4823708 kB' 'KernelStack: 10600 kB' 'PageTables: 3548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135564 kB' 'Slab: 496812 kB' 'SReclaimable: 135564 kB' 'SUnreclaim: 361248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.200 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.201 node0=512 expecting 512 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:07.201 node1=512 expecting 512 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.201 00:04:07.201 real 0m3.892s 00:04:07.201 user 0m1.436s 00:04:07.201 sys 0m2.486s 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.201 17:56:07 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.201 ************************************ 00:04:07.201 END TEST even_2G_alloc 00:04:07.201 ************************************ 00:04:07.201 17:56:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:07.201 17:56:07 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:07.201 17:56:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.201 17:56:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.201 17:56:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.201 ************************************ 00:04:07.201 START TEST odd_alloc 00:04:07.201 ************************************ 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 
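For context, the odd_alloc setup traced here asks for 1025 hugepages (2098176 kB rounded up to 2 MB pages) and spreads them over the two NUMA nodes as 513 + 512. The short standalone bash sketch below reproduces the node0=513 / node1=512 values seen in this trace; it is only one way to get that split, the variable names are illustrative, and it is not a copy of setup/hugepages.sh.

total=1025      # 2098176 kB of 2 MB hugepages, rounded up
num_nodes=2
declare -a per_node
remaining=$total
for (( node = num_nodes - 1; node >= 0; node-- )); do
    share=$(( remaining / (node + 1) ))    # even share over the nodes still unassigned
    per_node[node]=$share
    remaining=$(( remaining - share ))     # any leftover page ends up on node 0
done
printf 'node%d=%d\n' 0 "${per_node[0]}" 1 "${per_node[1]}"    # prints node0=513, node1=512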
00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.201 17:56:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:11.403 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:11.403 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
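The get_meminfo calls traced before and after this point all follow the same pattern: pick /proc/meminfo or, when a node number is given, /sys/devices/system/node/node$N/meminfo, load it with mapfile, strip the "Node N " prefix that the per-node files carry, then scan line by line until the requested key matches and echo its value. The helper below is a hedged reconstruction of that shape for readability, assembled from the commands visible in the trace; it is not a verbatim copy of setup/common.sh.

shopt -s extglob    # needed for the +([0-9]) pattern used to strip the node prefix
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo AnonHugePages      # system-wide value, as in the trace that follows
get_meminfo HugePages_Surp 1   # per-node value, as in the node 1 reads traced earlier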
00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39787804 kB' 'MemAvailable: 43339240 kB' 'Buffers: 4096 kB' 'Cached: 14405376 kB' 'SwapCached: 0 kB' 'Active: 11473464 kB' 'Inactive: 3471256 kB' 'Active(anon): 11031216 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537948 kB' 'Mapped: 213484 kB' 'Shmem: 10495968 kB' 'KReclaimable: 270140 kB' 'Slab: 900144 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 630004 kB' 'KernelStack: 22528 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 12407320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219080 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 
17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.403 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.404 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39787936 kB' 'MemAvailable: 43339372 kB' 'Buffers: 4096 kB' 'Cached: 14405376 kB' 'SwapCached: 0 kB' 'Active: 11473508 kB' 'Inactive: 3471256 kB' 'Active(anon): 11031260 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538012 kB' 'Mapped: 213464 kB' 'Shmem: 10495968 kB' 'KReclaimable: 270140 kB' 'Slab: 900004 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629864 kB' 'KernelStack: 22320 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 12407460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219000 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.405 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 
17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.406 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 
17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.407 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39789352 kB' 'MemAvailable: 43340788 kB' 'Buffers: 4096 kB' 'Cached: 14405376 
kB' 'SwapCached: 0 kB' 'Active: 11472744 kB' 'Inactive: 3471256 kB' 'Active(anon): 11030496 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537748 kB' 'Mapped: 213388 kB' 'Shmem: 10495968 kB' 'KReclaimable: 270140 kB' 'Slab: 900024 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629884 kB' 'KernelStack: 22208 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 12406236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218936 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.408 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.409 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 
-- # return 0 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:11.410 nr_hugepages=1025 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.410 resv_hugepages=0 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.410 surplus_hugepages=0 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.410 anon_hugepages=0 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.410 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39790040 kB' 'MemAvailable: 43341476 kB' 'Buffers: 4096 kB' 'Cached: 14405416 kB' 'SwapCached: 0 kB' 'Active: 11472636 kB' 'Inactive: 3471256 kB' 'Active(anon): 11030388 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537624 kB' 'Mapped: 213352 kB' 'Shmem: 10496008 kB' 'KReclaimable: 270140 kB' 'Slab: 900048 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629908 kB' 'KernelStack: 22320 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486596 kB' 'Committed_AS: 12406256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218936 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 
17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.411 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.412 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24412948 kB' 'MemUsed: 8226192 kB' 'SwapCached: 0 kB' 'Active: 6140556 kB' 'Inactive: 85080 kB' 'Active(anon): 5925684 kB' 'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5972144 kB' 'Mapped: 87156 kB' 'AnonPages: 256676 kB' 'Shmem: 5672192 kB' 'KernelStack: 11752 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 402924 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 268348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
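For orientation: the long run of [[ key == \H\u\g\e... ]] / continue entries here is the get_meminfo helper walking the node-0 meminfo dump it just printed, one field at a time, until it reaches the requested HugePages_Surp field. Reduced to a stand-alone sketch (the helper name and exact structure below are illustrative, inferred from the trace, not copied from setup/common.sh):
get_meminfo_sketch() {
  local get=$1 node=$2 mem_f=/proc/meminfo line var val
  # Use the per-node view when a node index is given and the sysfs file exists.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  while read -r line; do
    line=${line#"Node $node "}                # per-node files prefix every line with "Node N "
    IFS=': ' read -r var val _ <<< "$line"    # split "Key:   value kB" into key and value
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$mem_f"
  return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0    # prints 0 on this node, matching the trace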
00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.413 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.414 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.414 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 
kB' 'MemFree: 15376664 kB' 'MemUsed: 12279380 kB' 'SwapCached: 0 kB' 'Active: 5332568 kB' 'Inactive: 3386176 kB' 'Active(anon): 5105192 kB' 'Inactive(anon): 0 kB' 'Active(file): 227376 kB' 'Inactive(file): 3386176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437404 kB' 'Mapped: 126196 kB' 'AnonPages: 281392 kB' 'Shmem: 4823852 kB' 'KernelStack: 10552 kB' 'PageTables: 3364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135564 kB' 'Slab: 497124 kB' 'SReclaimable: 135564 kB' 'SUnreclaim: 361560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
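The node-1 pass below repeats the same scan against /sys/devices/system/node/node1/meminfo. What the surrounding hugepages.sh loop (setup/hugepages.sh@115-117 in the trace) does with the two results is roughly the following, assuming the illustrative helper sketched above and resv taken from the earlier global HugePages_Rsvd lookup:
resv=0                                     # reserved pages, per the earlier global lookup
declare -a nodes_test=([0]=512 [1]=513)    # free pages reported by each node's meminfo dump
for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += resv ))                      # fold reserved pages back in
  surp=$(get_meminfo_sketch HugePages_Surp "$node")   # surplus pages on this node
  (( nodes_test[node] += surp ))                      # 0 on both nodes in this run
done
# Leaves 512 and 513, the odd split the test then compares against its expectation.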
00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.415 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.416 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.417 17:56:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:11.417 node0=512 expecting 513 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:11.417 node1=513 expecting 512 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:11.417 00:04:11.417 real 0m4.398s 00:04:11.417 user 0m1.658s 00:04:11.417 sys 0m2.817s 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.417 17:56:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:11.417 ************************************ 00:04:11.417 END TEST odd_alloc 00:04:11.417 ************************************ 00:04:11.417 17:56:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:11.417 17:56:11 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:11.417 17:56:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.417 17:56:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.417 17:56:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:11.417 ************************************ 00:04:11.417 START TEST custom_alloc 00:04:11.417 ************************************ 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:11.417 17:56:11 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:11.417 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@78 -- # return 0 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.418 17:56:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:15.651 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.651 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 38786080 kB' 'MemAvailable: 42337516 kB' 'Buffers: 4096 kB' 'Cached: 14405536 kB' 'SwapCached: 0 kB' 'Active: 11473848 kB' 'Inactive: 3471256 kB' 'Active(anon): 11031600 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538696 kB' 'Mapped: 213500 kB' 'Shmem: 10496128 kB' 'KReclaimable: 270140 kB' 'Slab: 899404 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629264 kB' 'KernelStack: 22240 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 12408120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
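By this point custom_alloc has requested HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (apparently a 1 GiB and a 2 GiB request at the default 2 MiB page size: 1048576 kB / 2048 kB = 512 and 2097152 kB / 2048 kB = 1024, 1536 pages in total, matching the HugePages_Total: 1536 field in the dump above), and verify_nr_hugepages is scanning the global /proc/meminfo, starting with AnonHugePages. The per-node counts it goes on to verify can also be read straight from sysfs; an illustrative check, not part of the test:
# Standard kernel sysfs layout for 2 MiB huge pages; loop over all NUMA nodes.
for n in /sys/devices/system/node/node[0-9]*; do
  printf '%s: %s pages\n' "${n##*/}" \
    "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
done
# Expected for this run: node0: 512 pages, node1: 1024 pages.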
00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.651 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
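The repeated entries above and below are a single loop in setup/common.sh: get_meminfo walks every "key: value" row of /proc/meminfo (or a per-node meminfo file when a node id is given), skips rows whose key is not the one requested, and prints the matching value. A condensed, hedged sketch of that lookup — not the actual setup/common.sh code; the function name and the prefix handling are illustrative:

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        # prefer the per-node file when a node id is supplied, as the trace probes
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            # per-node files prefix every row with "Node <id> "; strip it first
            if [[ $line == Node\ * ]]; then
                line=${line#Node }
                line=${line#* }
            fi
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }
    # e.g. get_meminfo_sketch AnonHugePages   -> 0
    #      get_meminfo_sketch HugePages_Total -> 1536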
00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.652 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 38787716 kB' 'MemAvailable: 42339152 kB' 'Buffers: 4096 kB' 'Cached: 14405540 kB' 'SwapCached: 0 kB' 'Active: 11473704 kB' 'Inactive: 3471256 kB' 'Active(anon): 11031456 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538604 kB' 'Mapped: 213400 kB' 'Shmem: 10496132 kB' 'KReclaimable: 270140 kB' 'Slab: 899380 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629240 kB' 'KernelStack: 22400 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 12409380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218952 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.653 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.654 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.654 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 38789952 kB' 'MemAvailable: 42341388 kB' 'Buffers: 4096 kB' 'Cached: 14405540 kB' 'SwapCached: 0 kB' 'Active: 11474344 kB' 'Inactive: 3471256 kB' 'Active(anon): 11032096 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539216 kB' 'Mapped: 213904 kB' 'Shmem: 10496132 kB' 'KReclaimable: 270140 kB' 'Slab: 899380 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629240 kB' 'KernelStack: 22400 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 12411652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
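The meminfo snapshots printed in this run can also be cross-checked by hand: since only the default 2048 kB page size is in use here, Hugetlb (3145728 kB) is simply HugePages_Total (1536) times Hugepagesize (2048 kB). A one-off arithmetic check against the logged values (this identity is assumed only for this single-size, zero-surplus configuration):

    hugepages_total=1536      # HugePages_Total from the snapshot
    hugepagesize_kb=2048      # Hugepagesize: 2048 kB
    echo $(( hugepages_total * hugepagesize_kb ))   # prints 3145728, matching 'Hugetlb: 3145728 kB'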
00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.655 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:15.656 nr_hugepages=1536 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.656 resv_hugepages=0 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.656 surplus_hugepages=0 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.656 anon_hugepages=0 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:15.656 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
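The hugepages.sh lines just above (anon=0, surp=0, resv=0, nr_hugepages=1536, and the two (( ... )) checks) boil down to a small piece of accounting: the pool the kernel reports must match what the custom_alloc test asked for once surplus and reserved pages are added back in. A hedged sketch of that verification, reusing the illustrative get_meminfo_sketch helper from earlier — the variable and function names here are not the real hugepages.sh ones:

    verify_custom_alloc_sketch() {
        local nr_hugepages=$1                       # requested pool size, 1536 in this run
        local anon surp resv total
        anon=$(get_meminfo_sketch AnonHugePages)    # anonymous THP usage, 0 kB here
        surp=$(get_meminfo_sketch HugePages_Surp)   # surplus pages, 0 here
        resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved-but-unfaulted pages, 0 here
        total=$(get_meminfo_sketch HugePages_Total)
        echo "nr_hugepages=$nr_hugepages surplus=$surp reserved=$resv anon=$anon"
        # the reported pool must account exactly for the requested pages
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages ))
    }
    # In this log the check passes: 1536 == 1536 + 0 + 0.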
00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 38785124 kB' 'MemAvailable: 42336560 kB' 'Buffers: 4096 kB' 'Cached: 14405580 kB' 'SwapCached: 0 kB' 'Active: 11479444 kB' 'Inactive: 3471256 kB' 'Active(anon): 11037196 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544284 kB' 'Mapped: 214264 kB' 'Shmem: 10496172 kB' 'KReclaimable: 270140 kB' 'Slab: 899412 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629272 kB' 'KernelStack: 22512 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963332 kB' 'Committed_AS: 12415912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218972 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 
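The snapshot just printed is the useful part of this dump: HugePages_Total 1536, HugePages_Free 1536, Hugepagesize 2048 kB and Hugetlb 3145728 kB, which are self-consistent (1536 pages x 2048 kB = 3145728 kB, i.e. the 3 GiB pool custom_alloc expects before checking the per-node split). A quick cross-check one could run on a similar box, assuming only the default 2 MiB page size is in use and the kernel reports the Hugetlb field:

  # Cross-check that the hugetlb pool equals pages x page size (values as in the dump above).
  awk '/^HugePages_Total:/ {p=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {t=$2}
       END { printf "pages=%d size_kB=%d pool_kB=%d consistent=%s\n", p, s, t, (p*s==t) ? "yes" : "no" }' /proc/meminfo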
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.657 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 24436264 kB' 'MemUsed: 8202876 kB' 'SwapCached: 0 kB' 'Active: 6140796 kB' 'Inactive: 85080 kB' 'Active(anon): 5925924 kB' 'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5972164 kB' 'Mapped: 87156 kB' 'AnonPages: 256840 kB' 'Shmem: 5672212 kB' 'KernelStack: 11864 kB' 'PageTables: 5100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 402344 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 267768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
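At hugepages.sh@29-@33 above, get_nodes loops over /sys/devices/system/node/node+([0-9]) and records 512 pages for node 0 and 1024 for node 1, with no_nodes=2. The trace does not show which file those counts are read from; reading each node's nr_hugepages counter for the default 2048 kB size is one plausible source that matches the values seen. A hedged sketch under that assumption (array name mirrors the trace):

  # Sketch: collect the current 2 MiB hugepage count per NUMA node, as get_nodes does above.
  shopt -s extglob nullglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 2 on the machine traced here
  declare -p nodes_sys               # e.g. nodes_sys=([0]="512" [1]="1024")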
'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.658 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 
17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.659 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656044 kB' 'MemFree: 14346892 kB' 'MemUsed: 13309152 kB' 'SwapCached: 0 kB' 'Active: 5337840 kB' 'Inactive: 3386176 kB' 'Active(anon): 5110464 kB' 'Inactive(anon): 0 kB' 'Active(file): 227376 kB' 'Inactive(file): 3386176 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8437556 kB' 'Mapped: 126748 kB' 'AnonPages: 287084 kB' 'Shmem: 4824004 kB' 'KernelStack: 10584 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 135564 kB' 'Slab: 497036 kB' 'SReclaimable: 135564 kB' 'SUnreclaim: 361472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
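Note the switch at common.sh@23/@24 above: once a node id is supplied, the reader targets /sys/devices/system/node/node1/meminfo instead of /proc/meminfo, and the mem=("${mem[@]#Node +([0-9]) }") step strips the "Node 1 " prefix the kernel puts on every line of those per-node files, so the same key/value scan works unchanged. A small demonstration of that stripping on the two-node box traced here (extglob is required, as in the traced script):

  shopt -s extglob
  mapfile -t mem < /sys/devices/system/node/node1/meminfo   # lines look like "Node 1 MemTotal: ... kB"
  mem=("${mem[@]#Node +([0-9]) }")                          # drop the "Node <id> " prefix
  printf '%s\n' "${mem[@]}" | grep '^HugePages_'            # HugePages_Total: 1024, etc. in this run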
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 
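The two per-node dumps agree with the system-wide snapshot earlier: node 0 reports HugePages_Total 512 and node 1 reports 1024, which sum to the 1536 pages /proc/meminfo showed, exactly the split custom_alloc requested. One way to reproduce that sum from the same per-node meminfo files the trace is reading:

  # 512 + 1024 = 1536: per-node HugePages_Total adds up to the system-wide pool on this box.
  awk '/HugePages_Total:/ {sum += $NF} END {print "per-node sum:", sum}' \
      /sys/devices/system/node/node*/meminfo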
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.660 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.660 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:15.661 node0=512 expecting 512 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:15.661 node1=1024 expecting 1024 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:15.661 00:04:15.661 real 0m3.997s 00:04:15.661 user 0m1.478s 00:04:15.661 sys 0m2.474s 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.661 17:56:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.661 ************************************ 00:04:15.661 END TEST custom_alloc 00:04:15.661 ************************************ 00:04:15.661 17:56:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:15.661 17:56:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:15.661 17:56:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.661 17:56:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.661 17:56:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.661 ************************************ 00:04:15.661 START TEST no_shrink_alloc 00:04:15.661 ************************************ 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc 
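The closing check of custom_alloc at hugepages.sh@130 above is a plain string comparison: the observed per-node counts are joined as "512,1024" and matched against the expected pattern, right after the "node0=512 expecting 512" / "node1=1024 expecting 1024" echoes, and the test then ends with its timing summary. A hedged reconstruction of that final comparison, not the literal hugepages.sh code, with the array contents taken from this run:

  # Reconstruct the final custom_alloc check: observed per-node counts vs. requested ones.
  nodes_test=([0]=512 [1]=1024)   # what the nodes actually report (from the trace)
  nodes_sys=([0]=512 [1]=1024)    # what the test asked for
  observed=$(IFS=,; echo "${nodes_test[*]}")
  expected=$(IFS=,; echo "${nodes_sys[*]}")
  [[ $observed == "$expected" ]] && echo "custom_alloc OK: $observed"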
-- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.661 17:56:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:19.866 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.866 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # 
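The trace above closes the custom_alloc case (node0=512 and node1=1024 pages both matched) and opens no_shrink_alloc by calling get_test_nr_hugepages with a size of 2097152 for node 0, which the helper turns into nr_hugepages=1024 and assigns entirely to node 0 (nodes_test[0]=1024). A minimal stand-alone sketch of that arithmetic, assuming the size argument is in kB and the default hugepage size is the 2048 kB reported as Hugepagesize in the snapshots below; the names mirror the trace but this is not the SPDK script itself:

    # sketch only: derive the page count the trace reports as nr_hugepages=1024
    size_kb=2097152
    default_hugepages_kb=2048                 # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / default_hugepages_kb ))
    echo "$nr_hugepages"                      # 2097152 / 2048 = 1024

With only node 0 in user_nodes, the whole count lands on that node, which matches the HugePages_Total: 1024 reported in the meminfo snapshots that follow.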
verify_nr_hugepages 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39770876 kB' 'MemAvailable: 43322312 kB' 'Buffers: 4096 kB' 'Cached: 14405712 kB' 'SwapCached: 0 kB' 'Active: 11474688 kB' 'Inactive: 3471256 kB' 'Active(anon): 11032440 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538964 kB' 'Mapped: 213460 kB' 'Shmem: 10496304 kB' 'KReclaimable: 270140 kB' 'Slab: 899424 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629284 kB' 'KernelStack: 22352 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12407688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218904 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 
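The long runs of '[[ <field> == ... ]]' / '# continue' entries that follow are the xtrace of a single helper scanning the meminfo snapshot one field at a time until it reaches the requested key (AnonHugePages here, then HugePages_Surp and HugePages_Rsvd further down). A minimal stand-alone sketch of that scan, matching the behaviour visible in the trace (fields split on ': ', value echoed on the first match) but omitting the per-node 'Node N ' prefix stripping and not copied from setup/common.sh:

    # sketch only: fetch one numeric field from /proc/meminfo
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # every non-matching field shows up as '# continue' in the trace
            echo "$val"                       # numeric value; the 'kB' unit falls into $_
            return 0
        done < /proc/meminfo
        echo 0                                # sketch-only fallback for a missing field
    }
    get_meminfo_sketch AnonHugePages          # prints 0 on this runner

On this runner AnonHugePages is 0 kB, so the scan ends with 'echo 0' / 'return 0' and the verifier records anon=0.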
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.866 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
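For reference while reading these scans: AnonHugePages counts transparent hugepages backing anonymous memory and is separate from the hugetlb pool, while HugePages_Free, HugePages_Rsvd and HugePages_Surp describe the pool itself (unallocated pages, pages reserved for mappings but not yet faulted in, and surplus pages allocated beyond nr_hugepages). The same values can be pulled out directly, e.g.:

    grep -E 'AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo

The snapshots above show 1024 total, 1024 free and 0 for both Rsvd and Surp, so the pool is fully populated and idle at this point in the test.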
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.867 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39771208 kB' 'MemAvailable: 43322644 kB' 'Buffers: 4096 kB' 'Cached: 14405712 kB' 'SwapCached: 0 kB' 'Active: 11475168 kB' 'Inactive: 3471256 kB' 'Active(anon): 11032920 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539460 kB' 'Mapped: 213460 kB' 'Shmem: 10496304 kB' 'KReclaimable: 270140 kB' 'Slab: 899424 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629284 kB' 'KernelStack: 22368 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12407708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.868 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39771824 kB' 'MemAvailable: 43323260 kB' 'Buffers: 4096 kB' 'Cached: 14405712 kB' 'SwapCached: 0 kB' 'Active: 11474048 kB' 'Inactive: 3471256 kB' 'Active(anon): 11031800 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538764 kB' 'Mapped: 213380 kB' 'Shmem: 10496304 kB' 'KReclaimable: 270140 kB' 'Slab: 899392 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629252 kB' 'KernelStack: 22368 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12407728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.869 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.870 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.871 nr_hugepages=1024 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.871 resv_hugepages=0 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.871 surplus_hugepages=0 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.871 anon_hugepages=0 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39772752 kB' 'MemAvailable: 43324188 kB' 'Buffers: 4096 kB' 'Cached: 14405756 kB' 'SwapCached: 0 kB' 'Active: 11474284 kB' 'Inactive: 3471256 kB' 'Active(anon): 11032036 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538948 kB' 'Mapped: 213380 kB' 'Shmem: 10496348 kB' 'KReclaimable: 270140 kB' 'Slab: 899392 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 629252 kB' 'KernelStack: 22352 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12407752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218872 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.871 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 
17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.872 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
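The remainder of this trace repeats the same scan for HugePages_Total (1024 on this box) and then, per NUMA node, for HugePages_Surp, feeding the checks in setup/hugepages.sh that follow: the kernel-reported total must equal nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0 in this run), and each node's count must match what the test expects, which is why the log later prints "node0=1024 expecting 1024". The meminfo snapshot above is self-consistent as well: Hugetlb 2097152 kB is exactly 1024 pages * 2048 kB. A rough standalone sketch of those checks, using awk in place of the script's own read loop; the function name, the expected-per-node table, and the exit-code handling are assumptions, not SPDK's hugepages.sh:

#!/usr/bin/env bash
# Sketch only -- mirrors the accounting verified in this run: 1024 pages
# requested, 0 reserved, 0 surplus, all of them expected on node0.
verify_nr_hugepages_sketch() {
    local nr_hugepages=1024
    local resv surp total
    resv=$(awk '$1=="HugePages_Rsvd:"  {print $2}' /proc/meminfo)    # -> 0
    surp=$(awk '$1=="HugePages_Surp:"  {print $2}' /proc/meminfo)    # -> 0
    total=$(awk '$1=="HugePages_Total:"{print $2}' /proc/meminfo)    # -> 1024
    # Global consistency: kernel total == requested + surplus + reserved.
    (( total == nr_hugepages + surp + resv )) || return 1

    # Per-node split: node meminfo lines read "Node <n> <Key>: <value>".
    declare -A expected=([0]=1024 [1]=0)
    local node id got
    for node in /sys/devices/system/node/node[0-9]*; do
        [[ -e $node/meminfo ]] || continue
        id=${node##*node}
        got=$(awk '$3=="HugePages_Total:"{print $4}' "$node/meminfo")
        echo "node${id}=${got} expecting ${expected[$id]}"
        [[ $got == "${expected[$id]}" ]] || return 1
    done
}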
00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23378276 kB' 'MemUsed: 9260864 kB' 'SwapCached: 0 kB' 'Active: 6141248 kB' 'Inactive: 85080 kB' 'Active(anon): 5926376 kB' 'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5972248 kB' 'Mapped: 87160 kB' 'AnonPages: 257304 kB' 'Shmem: 5672296 kB' 'KernelStack: 11784 kB' 'PageTables: 5176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 402536 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 267960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.873 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 
17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 
17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.874 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:19.875 node0=1024 expecting 1024 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.875 17:56:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:22.408 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.672 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.672 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39768444 kB' 'MemAvailable: 43319880 kB' 'Buffers: 4096 kB' 'Cached: 14405852 kB' 'SwapCached: 0 kB' 'Active: 11477176 kB' 'Inactive: 3471256 kB' 'Active(anon): 11034928 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541332 kB' 'Mapped: 213460 kB' 'Shmem: 10496444 kB' 'KReclaimable: 270140 kB' 'Slab: 898288 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628148 kB' 'KernelStack: 22352 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12408704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218776 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.672 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 
17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39769136 kB' 'MemAvailable: 43320572 kB' 'Buffers: 4096 kB' 'Cached: 14405856 kB' 'SwapCached: 0 kB' 'Active: 11476776 kB' 'Inactive: 3471256 kB' 'Active(anon): 11034528 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541420 kB' 'Mapped: 213380 kB' 'Shmem: 10496448 kB' 'KReclaimable: 270140 kB' 'Slab: 898248 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628108 kB' 'KernelStack: 22336 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12408724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218776 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.673 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 
17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.674 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39768884 kB' 'MemAvailable: 43320320 kB' 'Buffers: 4096 kB' 'Cached: 14405872 kB' 'SwapCached: 0 kB' 'Active: 11477116 kB' 'Inactive: 3471256 kB' 'Active(anon): 11034868 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541708 kB' 'Mapped: 213380 kB' 'Shmem: 10496464 kB' 'KReclaimable: 270140 kB' 'Slab: 898248 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628108 kB' 'KernelStack: 22336 kB' 'PageTables: 8528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12408744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218776 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.675 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 
17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.676 17:56:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.677 nr_hugepages=1024 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.677 resv_hugepages=0 
00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.677 surplus_hugepages=0 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.677 anon_hugepages=0 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295184 kB' 'MemFree: 39770404 kB' 'MemAvailable: 43321840 kB' 'Buffers: 4096 kB' 'Cached: 14405896 kB' 'SwapCached: 0 kB' 'Active: 11477288 kB' 'Inactive: 3471256 kB' 'Active(anon): 11035040 kB' 'Inactive(anon): 0 kB' 'Active(file): 442248 kB' 'Inactive(file): 3471256 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541868 kB' 'Mapped: 213416 kB' 'Shmem: 10496488 kB' 'KReclaimable: 270140 kB' 'Slab: 898248 kB' 'SReclaimable: 270140 kB' 'SUnreclaim: 628108 kB' 'KernelStack: 22304 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487620 kB' 'Committed_AS: 12411624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218760 kB' 'VmallocChunk: 0 kB' 'Percpu: 82432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2686324 kB' 'DirectMap2M: 24262656 kB' 'DirectMap1G: 41943040 kB' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.677 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.678 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.679 17:56:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.679 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 
0 )) 00:04:22.941 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 23355604 kB' 'MemUsed: 9283536 kB' 'SwapCached: 0 kB' 'Active: 6142652 kB' 'Inactive: 85080 kB' 'Active(anon): 5927780 kB' 'Inactive(anon): 0 kB' 'Active(file): 214872 kB' 'Inactive(file): 85080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5972352 kB' 'Mapped: 87156 kB' 'AnonPages: 258548 kB' 'Shmem: 5672400 kB' 'KernelStack: 11720 kB' 'PageTables: 5024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134576 kB' 'Slab: 401604 kB' 'SReclaimable: 134576 kB' 'SUnreclaim: 267028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.942 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.943 node0=1024 expecting 1024 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.943 00:04:22.943 real 0m7.316s 00:04:22.943 user 0m2.430s 00:04:22.943 sys 0m4.621s 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.943 17:56:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.943 ************************************ 00:04:22.943 END TEST no_shrink_alloc 00:04:22.943 ************************************ 00:04:22.943 17:56:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:22.943 17:56:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:22.943 00:04:22.943 real 0m30.314s 00:04:22.943 user 0m10.234s 00:04:22.943 sys 0m18.169s 00:04:22.943 17:56:23 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.943 17:56:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.943 ************************************ 00:04:22.943 END TEST hugepages 00:04:22.943 ************************************ 00:04:22.943 17:56:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:22.943 17:56:23 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:22.943 
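The long per-field scans traced above are the setup/common.sh get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) one line at a time until it reaches the requested field, echoing its value and otherwise continuing. A minimal sketch of that lookup is shown below; the function name, argument handling, and the "Node N " prefix stripping are taken from what the trace shows, not from the real setup/common.sh source, so treat it as an illustration only.

    get_meminfo_sketch() {
        # $1 = field to look up (e.g. HugePages_Rsvd), $2 = optional NUMA node id
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node meminfo lines carry a "Node N " prefix; strip it so the
        # field name lands in $var exactly as it does for /proc/meminfo.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        echo 0
    }

    # e.g. get_meminfo_sketch HugePages_Total    -> 1024 on this runner
    #      get_meminfo_sketch HugePages_Surp 0   -> 0 for node0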
17:56:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.943 17:56:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.943 17:56:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.943 ************************************ 00:04:22.943 START TEST driver 00:04:22.943 ************************************ 00:04:22.943 17:56:23 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:22.943 * Looking for test storage... 00:04:22.943 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:22.943 17:56:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:22.943 17:56:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.943 17:56:23 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.247 17:56:28 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:28.247 17:56:28 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.247 17:56:28 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.247 17:56:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.247 ************************************ 00:04:28.247 START TEST guess_driver 00:04:28.247 ************************************ 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:28.247 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:28.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:28.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:28.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:28.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:28.248 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:28.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:28.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:28.248 Looking for driver=vfio-pci 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.248 17:56:28 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:32.437 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.437 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.437 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.438 17:56:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.817 17:56:34 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 
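[editor's note] For readers following the guess_driver trace above: the test checks whether unsafe no-IOMMU mode is enabled, whether any IOMMU groups are populated, and whether modprobe can resolve vfio_pci to real kernel objects before settling on vfio-pci. The following is a hedged, standalone sketch of that decision, written as an illustration rather than a copy of SPDK's setup/driver.sh; the uio_pci_generic fallback and exact wording are assumptions.

#!/usr/bin/env bash
# Hedged sketch of the driver-guess logic traced above: prefer vfio-pci when an
# IOMMU (or unsafe no-IOMMU mode) is available and the module resolves to real
# kernel objects, otherwise fall back to uio_pci_generic (assumed fallback).
pick_driver() {
    local unsafe_vfio=N n_groups
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # Count populated IOMMU groups (the trace above reports 256 on this host).
    n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
    if { (( n_groups > 0 )) || [[ $unsafe_vfio == [Yy]* ]]; } &&
       modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}
pick_driver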
00:04:39.157 00:04:39.157 real 0m10.803s 00:04:39.157 user 0m2.715s 00:04:39.157 sys 0m5.383s 00:04:39.157 17:56:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.157 17:56:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.157 ************************************ 00:04:39.157 END TEST guess_driver 00:04:39.157 ************************************ 00:04:39.157 17:56:39 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:39.157 00:04:39.157 real 0m16.126s 00:04:39.157 user 0m4.254s 00:04:39.157 sys 0m8.376s 00:04:39.157 17:56:39 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.157 17:56:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.157 ************************************ 00:04:39.157 END TEST driver 00:04:39.157 ************************************ 00:04:39.157 17:56:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:39.157 17:56:39 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:39.157 17:56:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.157 17:56:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.158 17:56:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.158 ************************************ 00:04:39.158 START TEST devices 00:04:39.158 ************************************ 00:04:39.158 17:56:39 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:39.158 * Looking for test storage... 00:04:39.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:39.158 17:56:39 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:39.158 17:56:39 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:39.158 17:56:39 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.158 17:56:39 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.438 17:56:43 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:44.438 17:56:43 setup.sh.devices -- 
setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:44.438 17:56:43 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:44.438 17:56:43 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:44.438 17:56:43 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:44.438 No valid GPT data, bailing 00:04:44.438 17:56:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.438 17:56:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:44.438 17:56:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:44.438 17:56:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:44.438 17:56:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:44.438 17:56:44 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:44.438 17:56:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:44.438 17:56:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.438 17:56:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.438 17:56:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:44.438 ************************************ 00:04:44.438 START TEST nvme_mount 00:04:44.438 ************************************ 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.438 17:56:44 
setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.438 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:44.439 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:44.439 17:56:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:45.007 Creating new GPT entries in memory. 00:04:45.007 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.007 other utilities. 00:04:45.007 17:56:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.007 17:56:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.007 17:56:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.007 17:56:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.007 17:56:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:45.946 Creating new GPT entries in memory. 00:04:45.946 The operation has completed successfully. 
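[editor's note] The sgdisk calls above wipe the label and carve out a single 1 GiB test partition (sectors 2048 through 2099199). A hedged sketch of the same step on a scratch disk follows; the device path is an assumption, and the sync_dev_uevents.sh helper seen in the log is replaced here by a plain partprobe.

#!/usr/bin/env bash
# Hedged sketch of the partitioning step traced above. Run only on a scratch disk.
set -euo pipefail
disk=/dev/nvme0n1                   # scratch NVMe disk (assumption)
size=$(( 1073741824 / 512 ))        # 1 GiB in 512-byte sectors = 2097152
start=2048
end=$(( start + size - 1 ))         # 2099199, the same range as the sgdisk call above
sgdisk "$disk" --zap-all                              # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:"$start":"$end"  # partition 1: sectors 2048..2099199
partprobe "$disk"                                     # have the kernel re-read the table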
00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1446078 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.946 17:56:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
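[editor's note] The long read loop above is the verify step: it scans setup.sh output line by line, skips every PCI function other than the allowed one, and looks for the "Active devices: ..., so not binding PCI dev" message that proves the mounted disk was left alone. A hedged sketch of that scan is below; the script path, BDF and expected mount name are assumptions taken from this particular log, and the field layout mirrors the read -r pci _ _ status pattern in the trace.

#!/usr/bin/env bash
# Hedged sketch of the verification loop traced above.
SETUP=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
target_bdf=0000:d8:00.0
expect=nvme0n1:nvme0n1p1
found=0
while read -r pci _ _ status; do
    [[ $pci == "$target_bdf" ]] || continue
    # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
    [[ $status == *"Active devices:"*"$expect"* ]] && found=1
done < <(PCI_ALLOWED="$target_bdf" "$SETUP" config)
(( found == 1 )) && echo "OK: $target_bdf left bound to the kernel nvme driver"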
00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.239 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.240 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.240 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.499 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.499 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.499 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.499 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.499 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:49.499 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:49.499 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.499 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:49.499 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.758 17:56:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.758 17:56:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.948 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.949 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.949 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.949 17:56:53 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:53.949 17:56:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.949 17:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.949 17:56:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.242 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.242 00:04:57.242 real 0m13.148s 00:04:57.242 user 0m3.534s 00:04:57.242 sys 0m7.334s 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.242 17:56:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:57.242 ************************************ 00:04:57.242 END TEST nvme_mount 00:04:57.242 ************************************ 00:04:57.242 17:56:57 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:57.242 17:56:57 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:57.242 17:56:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.242 17:56:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.242 17:56:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.242 ************************************ 00:04:57.242 START TEST dm_mount 00:04:57.242 ************************************ 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- 
setup/common.sh@40 -- # local part_no=2 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.242 17:56:57 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:58.181 Creating new GPT entries in memory. 00:04:58.182 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.182 other utilities. 00:04:58.182 17:56:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.182 17:56:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.182 17:56:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.182 17:56:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.182 17:56:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:59.119 Creating new GPT entries in memory. 00:04:59.119 The operation has completed successfully. 00:04:59.119 17:56:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.119 17:56:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.119 17:56:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:59.119 17:56:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.119 17:56:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:00.057 The operation has completed successfully. 
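[editor's note] At this point the dm_mount test has written two 1 GiB partitions (1:2048:2099199 and 2:2099200:4196351); the trace that follows creates a device-mapper node named nvme_dm_test on top of them. A hedged sketch of joining the two partitions into one linear dm target is shown here; the table, mount path and formatting step are illustrations, not SPDK's exact dm setup.

#!/usr/bin/env bash
# Hedged sketch: concatenate the two test partitions into a single linear
# device-mapper target called nvme_dm_test, then put ext4 on it.
set -euo pipefail
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
# Table format: <logical start> <length> linear <backing device> <backing offset>
printf '%s\n' "0 $s1 linear $p1 0" "$s1 $s2 linear $p2 0" | dmsetup create nvme_dm_test
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /mnt/dm_test && mount /dev/mapper/nvme_dm_test /mnt/dm_test   # mount point is an assumption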
00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1451004 00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.057 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.316 17:57:00 
setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.316 17:57:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.560 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.561 17:57:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.561 
17:57:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.848 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:07.849 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:07.849 00:05:07.849 real 0m10.929s 00:05:07.849 user 0m2.679s 00:05:07.849 sys 0m5.264s 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.849 17:57:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:07.849 ************************************ 00:05:07.849 END TEST dm_mount 00:05:07.849 ************************************ 00:05:08.107 17:57:08 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.107 17:57:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.366 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.366 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.366 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.366 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@12 -- # 
cleanup_dm 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.366 17:57:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.366 00:05:08.366 real 0m29.140s 00:05:08.366 user 0m7.876s 00:05:08.366 sys 0m15.939s 00:05:08.366 17:57:08 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.366 17:57:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.366 ************************************ 00:05:08.366 END TEST devices 00:05:08.366 ************************************ 00:05:08.366 17:57:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:08.366 00:05:08.366 real 1m43.488s 00:05:08.366 user 0m30.956s 00:05:08.366 sys 0m59.489s 00:05:08.366 17:57:08 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.366 17:57:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.366 ************************************ 00:05:08.366 END TEST setup.sh 00:05:08.366 ************************************ 00:05:08.366 17:57:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.366 17:57:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:12.556 Hugepages 00:05:12.556 node hugesize free / total 00:05:12.556 node0 1048576kB 0 / 0 00:05:12.556 node0 2048kB 2048 / 2048 00:05:12.556 node1 1048576kB 0 / 0 00:05:12.556 node1 2048kB 0 / 0 00:05:12.556 00:05:12.556 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.556 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:12.556 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:12.556 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:12.556 17:57:12 -- spdk/autotest.sh@130 -- # uname -s 00:05:12.556 17:57:12 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:12.556 17:57:12 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:12.556 17:57:12 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:16.753 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:05:16.753 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:16.753 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:18.659 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.659 17:57:18 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:19.597 17:57:19 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:19.597 17:57:19 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:19.597 17:57:19 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.597 17:57:19 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:19.597 17:57:19 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:19.597 17:57:19 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:19.597 17:57:19 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.597 17:57:19 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.597 17:57:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:19.597 17:57:19 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:19.597 17:57:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:19.597 17:57:19 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.886 Waiting for block devices as requested 00:05:22.886 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:22.886 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:23.144 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:23.144 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.144 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:23.144 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:23.402 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:23.402 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:23.402 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:23.661 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:23.661 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:23.661 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:23.661 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:23.920 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:23.920 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:23.920 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:24.180 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:24.180 17:57:24 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:24.180 17:57:24 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:24.180 17:57:24 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 
00:05:24.180 17:57:24 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:24.180 17:57:24 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:24.180 17:57:24 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:24.180 17:57:24 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:24.439 17:57:24 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:24.439 17:57:24 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:24.439 17:57:24 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:24.439 17:57:24 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:24.439 17:57:24 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:24.439 17:57:24 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:24.439 17:57:24 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:24.439 17:57:24 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:24.439 17:57:24 -- common/autotest_common.sh@1557 -- # continue 00:05:24.439 17:57:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:24.439 17:57:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.439 17:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.439 17:57:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:24.439 17:57:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.439 17:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:24.439 17:57:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:28.702 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:28.702 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:30.609 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.609 17:57:30 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:30.609 17:57:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.609 17:57:30 -- common/autotest_common.sh@10 -- # set +x 00:05:30.609 17:57:30 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:30.609 17:57:30 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:30.609 17:57:30 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.609 17:57:30 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:30.609 17:57:30 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:30.609 17:57:30 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:30.609 17:57:30 -- common/autotest_common.sh@1513 -- # 
bdfs=() 00:05:30.609 17:57:30 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:30.609 17:57:30 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.609 17:57:30 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:30.609 17:57:30 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:30.609 17:57:30 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:30.609 17:57:30 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:30.609 17:57:30 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:30.609 17:57:30 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:30.609 17:57:30 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:30.609 17:57:30 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:30.609 17:57:30 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:30.609 17:57:30 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:30.609 17:57:30 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:30.609 17:57:30 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1462852 00:05:30.609 17:57:30 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.609 17:57:30 -- common/autotest_common.sh@1598 -- # waitforlisten 1462852 00:05:30.609 17:57:30 -- common/autotest_common.sh@829 -- # '[' -z 1462852 ']' 00:05:30.609 17:57:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.609 17:57:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.609 17:57:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.609 17:57:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.609 17:57:30 -- common/autotest_common.sh@10 -- # set +x 00:05:30.609 [2024-07-15 17:57:31.007480] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:05:30.609 [2024-07-15 17:57:31.007538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462852 ] 00:05:30.868 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.868 [2024-07-15 17:57:31.088976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.868 [2024-07-15 17:57:31.162581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.437 17:57:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.437 17:57:31 -- common/autotest_common.sh@862 -- # return 0 00:05:31.437 17:57:31 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:31.437 17:57:31 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:31.437 17:57:31 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:34.727 nvme0n1 00:05:34.727 17:57:34 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:34.727 [2024-07-15 17:57:34.974872] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:34.727 request: 00:05:34.727 { 00:05:34.727 "nvme_ctrlr_name": "nvme0", 00:05:34.727 "password": "test", 00:05:34.727 "method": "bdev_nvme_opal_revert", 00:05:34.727 "req_id": 1 00:05:34.727 } 00:05:34.727 Got JSON-RPC error response 00:05:34.727 response: 00:05:34.727 { 00:05:34.727 "code": -32602, 00:05:34.727 "message": "Invalid parameters" 00:05:34.727 } 00:05:34.727 17:57:34 -- common/autotest_common.sh@1604 -- # true 00:05:34.727 17:57:34 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:34.727 17:57:34 -- common/autotest_common.sh@1608 -- # killprocess 1462852 00:05:34.727 17:57:34 -- common/autotest_common.sh@948 -- # '[' -z 1462852 ']' 00:05:34.727 17:57:34 -- common/autotest_common.sh@952 -- # kill -0 1462852 00:05:34.727 17:57:34 -- common/autotest_common.sh@953 -- # uname 00:05:34.727 17:57:34 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.727 17:57:34 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462852 00:05:34.727 17:57:35 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.727 17:57:35 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.727 17:57:35 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462852' 00:05:34.727 killing process with pid 1462852 00:05:34.727 17:57:35 -- common/autotest_common.sh@967 -- # kill 1462852 00:05:34.727 17:57:35 -- common/autotest_common.sh@972 -- # wait 1462852 00:05:37.261 17:57:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:37.261 17:57:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:37.261 17:57:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:37.261 17:57:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:37.261 17:57:37 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:37.261 17:57:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.261 17:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.261 17:57:37 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:37.261 17:57:37 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:37.261 17:57:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
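The bdev_nvme_opal_revert exchange above is an ordinary SPDK JSON-RPC round trip: scripts/rpc.py sends the request object over the UNIX-domain socket /var/tmp/spdk.sock and the target answers with a JSON-RPC error (-32602) because this controller does not support Opal. Below is a minimal hand-run sketch of the same attach/inspect sequence; the binary and script paths and the 0000:d8:00.0 BDF are taken from this log, and the readiness poll via rpc_get_methods is an assumption of this sketch rather than what the harness's waitforlisten actually does.

    #!/usr/bin/env bash
    # Sketch only: replay the RPC sequence from this log against a fresh spdk_tgt.
    set -euo pipefail
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    sock=/var/tmp/spdk.sock

    # Start the SPDK target; it serves JSON-RPC on $sock once initialized.
    "$rootdir/build/bin/spdk_tgt" &
    tgt_pid=$!

    # Poll until the socket accepts RPCs (the test harness uses waitforlisten instead).
    until "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

    # Attach the PCIe NVMe controller and inspect the namespace bdev it exposes.
    "$rootdir/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    "$rootdir/scripts/rpc.py" -s "$sock" bdev_get_bdevs -b nvme0n1

    kill "$tgt_pid"
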
00:05:37.261 17:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.261 17:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:37.261 ************************************ 00:05:37.261 START TEST env 00:05:37.261 ************************************ 00:05:37.261 17:57:37 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:37.520 * Looking for test storage... 00:05:37.520 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:37.520 17:57:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.520 17:57:37 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.520 17:57:37 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.520 17:57:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.520 ************************************ 00:05:37.520 START TEST env_memory 00:05:37.520 ************************************ 00:05:37.520 17:57:37 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.520 00:05:37.520 00:05:37.520 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.520 http://cunit.sourceforge.net/ 00:05:37.520 00:05:37.520 00:05:37.520 Suite: memory 00:05:37.520 Test: alloc and free memory map ...[2024-07-15 17:57:37.833173] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.520 passed 00:05:37.520 Test: mem map translation ...[2024-07-15 17:57:37.851725] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.520 [2024-07-15 17:57:37.851741] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.520 [2024-07-15 17:57:37.851778] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.520 [2024-07-15 17:57:37.851786] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.520 passed 00:05:37.520 Test: mem map registration ...[2024-07-15 17:57:37.887027] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.520 [2024-07-15 17:57:37.887042] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.520 passed 00:05:37.780 Test: mem map adjacent registrations ...passed 00:05:37.780 00:05:37.780 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.780 suites 1 1 n/a 0 0 00:05:37.780 tests 4 4 4 0 0 00:05:37.780 asserts 152 152 152 0 n/a 00:05:37.780 00:05:37.780 Elapsed time = 0.131 seconds 00:05:37.780 00:05:37.780 real 0m0.145s 00:05:37.780 user 0m0.136s 00:05:37.780 sys 0m0.009s 00:05:37.780 17:57:37 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.780 17:57:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.780 
************************************ 00:05:37.780 END TEST env_memory 00:05:37.780 ************************************ 00:05:37.780 17:57:37 env -- common/autotest_common.sh@1142 -- # return 0 00:05:37.780 17:57:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.780 17:57:37 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.780 17:57:37 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.780 17:57:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.780 ************************************ 00:05:37.780 START TEST env_vtophys 00:05:37.780 ************************************ 00:05:37.780 17:57:38 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.780 EAL: lib.eal log level changed from notice to debug 00:05:37.780 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.780 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.780 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.780 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.780 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.780 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.780 EAL: Detected lcore 6 as core 6 on socket 0 00:05:37.780 EAL: Detected lcore 7 as core 8 on socket 0 00:05:37.780 EAL: Detected lcore 8 as core 9 on socket 0 00:05:37.780 EAL: Detected lcore 9 as core 10 on socket 0 00:05:37.780 EAL: Detected lcore 10 as core 11 on socket 0 00:05:37.780 EAL: Detected lcore 11 as core 12 on socket 0 00:05:37.780 EAL: Detected lcore 12 as core 13 on socket 0 00:05:37.780 EAL: Detected lcore 13 as core 14 on socket 0 00:05:37.780 EAL: Detected lcore 14 as core 16 on socket 0 00:05:37.780 EAL: Detected lcore 15 as core 17 on socket 0 00:05:37.780 EAL: Detected lcore 16 as core 18 on socket 0 00:05:37.780 EAL: Detected lcore 17 as core 19 on socket 0 00:05:37.780 EAL: Detected lcore 18 as core 20 on socket 0 00:05:37.780 EAL: Detected lcore 19 as core 21 on socket 0 00:05:37.780 EAL: Detected lcore 20 as core 22 on socket 0 00:05:37.780 EAL: Detected lcore 21 as core 24 on socket 0 00:05:37.780 EAL: Detected lcore 22 as core 25 on socket 0 00:05:37.780 EAL: Detected lcore 23 as core 26 on socket 0 00:05:37.780 EAL: Detected lcore 24 as core 27 on socket 0 00:05:37.780 EAL: Detected lcore 25 as core 28 on socket 0 00:05:37.780 EAL: Detected lcore 26 as core 29 on socket 0 00:05:37.780 EAL: Detected lcore 27 as core 30 on socket 0 00:05:37.780 EAL: Detected lcore 28 as core 0 on socket 1 00:05:37.780 EAL: Detected lcore 29 as core 1 on socket 1 00:05:37.780 EAL: Detected lcore 30 as core 2 on socket 1 00:05:37.780 EAL: Detected lcore 31 as core 3 on socket 1 00:05:37.780 EAL: Detected lcore 32 as core 4 on socket 1 00:05:37.780 EAL: Detected lcore 33 as core 5 on socket 1 00:05:37.780 EAL: Detected lcore 34 as core 6 on socket 1 00:05:37.780 EAL: Detected lcore 35 as core 8 on socket 1 00:05:37.780 EAL: Detected lcore 36 as core 9 on socket 1 00:05:37.780 EAL: Detected lcore 37 as core 10 on socket 1 00:05:37.780 EAL: Detected lcore 38 as core 11 on socket 1 00:05:37.780 EAL: Detected lcore 39 as core 12 on socket 1 00:05:37.780 EAL: Detected lcore 40 as core 13 on socket 1 00:05:37.780 EAL: Detected lcore 41 as core 14 on socket 1 00:05:37.780 EAL: Detected lcore 42 as core 16 on socket 1 00:05:37.780 EAL: Detected lcore 43 as core 17 on socket 1 00:05:37.780 EAL: Detected lcore 44 as core 18 on socket 1 00:05:37.780 
EAL: Detected lcore 45 as core 19 on socket 1 00:05:37.780 EAL: Detected lcore 46 as core 20 on socket 1 00:05:37.780 EAL: Detected lcore 47 as core 21 on socket 1 00:05:37.780 EAL: Detected lcore 48 as core 22 on socket 1 00:05:37.780 EAL: Detected lcore 49 as core 24 on socket 1 00:05:37.780 EAL: Detected lcore 50 as core 25 on socket 1 00:05:37.780 EAL: Detected lcore 51 as core 26 on socket 1 00:05:37.780 EAL: Detected lcore 52 as core 27 on socket 1 00:05:37.780 EAL: Detected lcore 53 as core 28 on socket 1 00:05:37.780 EAL: Detected lcore 54 as core 29 on socket 1 00:05:37.780 EAL: Detected lcore 55 as core 30 on socket 1 00:05:37.780 EAL: Detected lcore 56 as core 0 on socket 0 00:05:37.780 EAL: Detected lcore 57 as core 1 on socket 0 00:05:37.780 EAL: Detected lcore 58 as core 2 on socket 0 00:05:37.780 EAL: Detected lcore 59 as core 3 on socket 0 00:05:37.781 EAL: Detected lcore 60 as core 4 on socket 0 00:05:37.781 EAL: Detected lcore 61 as core 5 on socket 0 00:05:37.781 EAL: Detected lcore 62 as core 6 on socket 0 00:05:37.781 EAL: Detected lcore 63 as core 8 on socket 0 00:05:37.781 EAL: Detected lcore 64 as core 9 on socket 0 00:05:37.781 EAL: Detected lcore 65 as core 10 on socket 0 00:05:37.781 EAL: Detected lcore 66 as core 11 on socket 0 00:05:37.781 EAL: Detected lcore 67 as core 12 on socket 0 00:05:37.781 EAL: Detected lcore 68 as core 13 on socket 0 00:05:37.781 EAL: Detected lcore 69 as core 14 on socket 0 00:05:37.781 EAL: Detected lcore 70 as core 16 on socket 0 00:05:37.781 EAL: Detected lcore 71 as core 17 on socket 0 00:05:37.781 EAL: Detected lcore 72 as core 18 on socket 0 00:05:37.781 EAL: Detected lcore 73 as core 19 on socket 0 00:05:37.781 EAL: Detected lcore 74 as core 20 on socket 0 00:05:37.781 EAL: Detected lcore 75 as core 21 on socket 0 00:05:37.781 EAL: Detected lcore 76 as core 22 on socket 0 00:05:37.781 EAL: Detected lcore 77 as core 24 on socket 0 00:05:37.781 EAL: Detected lcore 78 as core 25 on socket 0 00:05:37.781 EAL: Detected lcore 79 as core 26 on socket 0 00:05:37.781 EAL: Detected lcore 80 as core 27 on socket 0 00:05:37.781 EAL: Detected lcore 81 as core 28 on socket 0 00:05:37.781 EAL: Detected lcore 82 as core 29 on socket 0 00:05:37.781 EAL: Detected lcore 83 as core 30 on socket 0 00:05:37.781 EAL: Detected lcore 84 as core 0 on socket 1 00:05:37.781 EAL: Detected lcore 85 as core 1 on socket 1 00:05:37.781 EAL: Detected lcore 86 as core 2 on socket 1 00:05:37.781 EAL: Detected lcore 87 as core 3 on socket 1 00:05:37.781 EAL: Detected lcore 88 as core 4 on socket 1 00:05:37.781 EAL: Detected lcore 89 as core 5 on socket 1 00:05:37.781 EAL: Detected lcore 90 as core 6 on socket 1 00:05:37.781 EAL: Detected lcore 91 as core 8 on socket 1 00:05:37.781 EAL: Detected lcore 92 as core 9 on socket 1 00:05:37.781 EAL: Detected lcore 93 as core 10 on socket 1 00:05:37.781 EAL: Detected lcore 94 as core 11 on socket 1 00:05:37.781 EAL: Detected lcore 95 as core 12 on socket 1 00:05:37.781 EAL: Detected lcore 96 as core 13 on socket 1 00:05:37.781 EAL: Detected lcore 97 as core 14 on socket 1 00:05:37.781 EAL: Detected lcore 98 as core 16 on socket 1 00:05:37.781 EAL: Detected lcore 99 as core 17 on socket 1 00:05:37.781 EAL: Detected lcore 100 as core 18 on socket 1 00:05:37.781 EAL: Detected lcore 101 as core 19 on socket 1 00:05:37.781 EAL: Detected lcore 102 as core 20 on socket 1 00:05:37.781 EAL: Detected lcore 103 as core 21 on socket 1 00:05:37.781 EAL: Detected lcore 104 as core 22 on socket 1 00:05:37.781 EAL: Detected lcore 105 as 
core 24 on socket 1 00:05:37.781 EAL: Detected lcore 106 as core 25 on socket 1 00:05:37.781 EAL: Detected lcore 107 as core 26 on socket 1 00:05:37.781 EAL: Detected lcore 108 as core 27 on socket 1 00:05:37.781 EAL: Detected lcore 109 as core 28 on socket 1 00:05:37.781 EAL: Detected lcore 110 as core 29 on socket 1 00:05:37.781 EAL: Detected lcore 111 as core 30 on socket 1 00:05:37.781 EAL: Maximum logical cores by configuration: 128 00:05:37.781 EAL: Detected CPU lcores: 112 00:05:37.781 EAL: Detected NUMA nodes: 2 00:05:37.781 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:37.781 EAL: Detected shared linkage of DPDK 00:05:37.781 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.781 EAL: Bus pci wants IOVA as 'DC' 00:05:37.781 EAL: Buses did not request a specific IOVA mode. 00:05:37.781 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.781 EAL: Selected IOVA mode 'VA' 00:05:37.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.781 EAL: Probing VFIO support... 00:05:37.781 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.781 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.781 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.781 EAL: VFIO support initialized 00:05:37.781 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.781 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.781 EAL: Setting up physically contiguous memory... 00:05:37.781 EAL: Setting maximum number of open files to 524288 00:05:37.781 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.781 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.781 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.781 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 
0x201000800000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.781 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.781 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.781 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.781 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.781 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:37.781 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.781 EAL: Hugepages will be freed exactly as allocated. 00:05:37.781 EAL: No shared files mode enabled, IPC is disabled 00:05:37.781 EAL: No shared files mode enabled, IPC is disabled 00:05:37.781 EAL: TSC frequency is ~2500000 KHz 00:05:37.781 EAL: Main lcore 0 is ready (tid=7f2219c20a00;cpuset=[0]) 00:05:37.781 EAL: Trying to obtain current memory policy. 00:05:37.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.781 EAL: Restoring previous memory policy: 0 00:05:37.781 EAL: request: mp_malloc_sync 00:05:37.781 EAL: No shared files mode enabled, IPC is disabled 00:05:37.781 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.781 EAL: No shared files mode enabled, IPC is disabled 00:05:37.781 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.781 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.781 00:05:37.781 00:05:37.781 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.781 http://cunit.sourceforge.net/ 00:05:37.781 00:05:37.781 00:05:37.782 Suite: components_suite 00:05:37.782 Test: vtophys_malloc_test ...passed 00:05:37.782 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.782 EAL: Restoring previous memory policy: 4 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.782 EAL: Trying to obtain current memory policy. 
00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.782 EAL: Restoring previous memory policy: 4 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.782 EAL: Trying to obtain current memory policy. 00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.782 EAL: Restoring previous memory policy: 4 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.782 EAL: Trying to obtain current memory policy. 00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.782 EAL: Restoring previous memory policy: 4 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.782 EAL: Trying to obtain current memory policy. 00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.782 EAL: Restoring previous memory policy: 4 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.782 EAL: Trying to obtain current memory policy. 00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.782 EAL: Restoring previous memory policy: 4 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.782 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.782 EAL: request: mp_malloc_sync 00:05:37.782 EAL: No shared files mode enabled, IPC is disabled 00:05:37.782 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.782 EAL: Trying to obtain current memory policy. 
00:05:37.782 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.041 EAL: Restoring previous memory policy: 4 00:05:38.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.041 EAL: request: mp_malloc_sync 00:05:38.041 EAL: No shared files mode enabled, IPC is disabled 00:05:38.041 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.041 EAL: request: mp_malloc_sync 00:05:38.041 EAL: No shared files mode enabled, IPC is disabled 00:05:38.041 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.041 EAL: Trying to obtain current memory policy. 00:05:38.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.041 EAL: Restoring previous memory policy: 4 00:05:38.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.041 EAL: request: mp_malloc_sync 00:05:38.041 EAL: No shared files mode enabled, IPC is disabled 00:05:38.041 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.041 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.041 EAL: request: mp_malloc_sync 00:05:38.041 EAL: No shared files mode enabled, IPC is disabled 00:05:38.041 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.041 EAL: Trying to obtain current memory policy. 00:05:38.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.300 EAL: Restoring previous memory policy: 4 00:05:38.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.300 EAL: request: mp_malloc_sync 00:05:38.300 EAL: No shared files mode enabled, IPC is disabled 00:05:38.300 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.300 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.300 EAL: request: mp_malloc_sync 00:05:38.300 EAL: No shared files mode enabled, IPC is disabled 00:05:38.300 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.300 EAL: Trying to obtain current memory policy. 
00:05:38.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.559 EAL: Restoring previous memory policy: 4 00:05:38.559 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.559 EAL: request: mp_malloc_sync 00:05:38.559 EAL: No shared files mode enabled, IPC is disabled 00:05:38.559 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.817 EAL: request: mp_malloc_sync 00:05:38.817 EAL: No shared files mode enabled, IPC is disabled 00:05:38.817 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.817 passed 00:05:38.817 00:05:38.817 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.817 suites 1 1 n/a 0 0 00:05:38.817 tests 2 2 2 0 0 00:05:38.817 asserts 497 497 497 0 n/a 00:05:38.817 00:05:38.817 Elapsed time = 0.965 seconds 00:05:38.817 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.817 EAL: request: mp_malloc_sync 00:05:38.817 EAL: No shared files mode enabled, IPC is disabled 00:05:38.817 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.817 EAL: No shared files mode enabled, IPC is disabled 00:05:38.817 EAL: No shared files mode enabled, IPC is disabled 00:05:38.817 EAL: No shared files mode enabled, IPC is disabled 00:05:38.817 00:05:38.817 real 0m1.115s 00:05:38.817 user 0m0.647s 00:05:38.817 sys 0m0.431s 00:05:38.817 17:57:39 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.817 17:57:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.817 ************************************ 00:05:38.817 END TEST env_vtophys 00:05:38.817 ************************************ 00:05:38.817 17:57:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:38.817 17:57:39 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.817 17:57:39 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.817 17:57:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.817 17:57:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.817 ************************************ 00:05:38.817 START TEST env_pci 00:05:38.817 ************************************ 00:05:38.817 17:57:39 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.817 00:05:38.817 00:05:38.817 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.817 http://cunit.sourceforge.net/ 00:05:38.817 00:05:38.817 00:05:38.817 Suite: pci 00:05:38.817 Test: pci_hook ...[2024-07-15 17:57:39.214929] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1464405 has claimed it 00:05:39.075 EAL: Cannot find device (10000:00:01.0) 00:05:39.075 EAL: Failed to attach device on primary process 00:05:39.075 passed 00:05:39.075 00:05:39.075 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.075 suites 1 1 n/a 0 0 00:05:39.075 tests 1 1 1 0 0 00:05:39.075 asserts 25 25 25 0 n/a 00:05:39.075 00:05:39.075 Elapsed time = 0.033 seconds 00:05:39.075 00:05:39.075 real 0m0.044s 00:05:39.075 user 0m0.012s 00:05:39.075 sys 0m0.032s 00:05:39.075 17:57:39 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.075 17:57:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:39.075 ************************************ 00:05:39.075 END TEST env_pci 00:05:39.076 ************************************ 00:05:39.076 
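Each env sub-test above (env_memory, env_vtophys, env_pci) is a standalone CUnit binary that run_test simply executes; env_dpdk_post_init, which follows, additionally receives an EAL core mask and base virtual address. A rough sketch of invoking them directly, outside the autotest harness, using the paths printed in this log (root privileges and pre-configured hugepages are assumed):

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # Plain unit-test binaries; each prints the suite/summary blocks seen above.
    sudo "$rootdir/test/env/memory/memory_ut"
    sudo "$rootdir/test/env/vtophys/vtophys"
    sudo "$rootdir/test/env/pci/pci_ut"

    # env_dpdk_post_init takes the same EAL arguments env.sh passes in the next test.
    sudo "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
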
17:57:39 env -- common/autotest_common.sh@1142 -- # return 0 00:05:39.076 17:57:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.076 17:57:39 env -- env/env.sh@15 -- # uname 00:05:39.076 17:57:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.076 17:57:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:39.076 17:57:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.076 17:57:39 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:39.076 17:57:39 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.076 17:57:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.076 ************************************ 00:05:39.076 START TEST env_dpdk_post_init 00:05:39.076 ************************************ 00:05:39.076 17:57:39 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.076 EAL: Detected CPU lcores: 112 00:05:39.076 EAL: Detected NUMA nodes: 2 00:05:39.076 EAL: Detected shared linkage of DPDK 00:05:39.076 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.076 EAL: Selected IOVA mode 'VA' 00:05:39.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.076 EAL: VFIO support initialized 00:05:39.076 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.334 EAL: Using IOMMU type 1 (Type 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:39.334 EAL: Ignore 
mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:39.334 EAL: Ignore mapping IO port bar(1) 00:05:39.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:40.269 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:44.457 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:44.457 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:44.457 Starting DPDK initialization... 00:05:44.457 Starting SPDK post initialization... 00:05:44.457 SPDK NVMe probe 00:05:44.457 Attaching to 0000:d8:00.0 00:05:44.457 Attached to 0000:d8:00.0 00:05:44.457 Cleaning up... 00:05:44.457 00:05:44.457 real 0m5.377s 00:05:44.457 user 0m3.952s 00:05:44.457 sys 0m0.472s 00:05:44.457 17:57:44 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.457 17:57:44 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.457 ************************************ 00:05:44.457 END TEST env_dpdk_post_init 00:05:44.457 ************************************ 00:05:44.457 17:57:44 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.457 17:57:44 env -- env/env.sh@26 -- # uname 00:05:44.457 17:57:44 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.458 17:57:44 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.458 17:57:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.458 17:57:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.458 17:57:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.458 ************************************ 00:05:44.458 START TEST env_mem_callbacks 00:05:44.458 ************************************ 00:05:44.458 17:57:44 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.458 EAL: Detected CPU lcores: 112 00:05:44.458 EAL: Detected NUMA nodes: 2 00:05:44.458 EAL: Detected shared linkage of DPDK 00:05:44.458 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.717 EAL: Selected IOVA mode 'VA' 00:05:44.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.717 EAL: VFIO support initialized 00:05:44.717 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.717 00:05:44.717 00:05:44.717 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.717 http://cunit.sourceforge.net/ 00:05:44.717 00:05:44.717 00:05:44.717 Suite: memory 00:05:44.717 Test: test ... 
00:05:44.717 register 0x200000200000 2097152 00:05:44.717 malloc 3145728 00:05:44.717 register 0x200000400000 4194304 00:05:44.717 buf 0x200000500000 len 3145728 PASSED 00:05:44.717 malloc 64 00:05:44.717 buf 0x2000004fff40 len 64 PASSED 00:05:44.717 malloc 4194304 00:05:44.717 register 0x200000800000 6291456 00:05:44.717 buf 0x200000a00000 len 4194304 PASSED 00:05:44.717 free 0x200000500000 3145728 00:05:44.717 free 0x2000004fff40 64 00:05:44.717 unregister 0x200000400000 4194304 PASSED 00:05:44.717 free 0x200000a00000 4194304 00:05:44.717 unregister 0x200000800000 6291456 PASSED 00:05:44.717 malloc 8388608 00:05:44.717 register 0x200000400000 10485760 00:05:44.717 buf 0x200000600000 len 8388608 PASSED 00:05:44.717 free 0x200000600000 8388608 00:05:44.717 unregister 0x200000400000 10485760 PASSED 00:05:44.717 passed 00:05:44.717 00:05:44.717 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.717 suites 1 1 n/a 0 0 00:05:44.717 tests 1 1 1 0 0 00:05:44.717 asserts 15 15 15 0 n/a 00:05:44.717 00:05:44.717 Elapsed time = 0.005 seconds 00:05:44.717 00:05:44.717 real 0m0.073s 00:05:44.717 user 0m0.019s 00:05:44.717 sys 0m0.054s 00:05:44.717 17:57:44 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.717 17:57:44 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.717 ************************************ 00:05:44.717 END TEST env_mem_callbacks 00:05:44.717 ************************************ 00:05:44.717 17:57:44 env -- common/autotest_common.sh@1142 -- # return 0 00:05:44.717 00:05:44.717 real 0m7.269s 00:05:44.717 user 0m4.966s 00:05:44.717 sys 0m1.354s 00:05:44.717 17:57:44 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.717 17:57:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.717 ************************************ 00:05:44.717 END TEST env 00:05:44.717 ************************************ 00:05:44.717 17:57:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:44.717 17:57:44 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.717 17:57:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.717 17:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.717 17:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:44.717 ************************************ 00:05:44.717 START TEST rpc 00:05:44.717 ************************************ 00:05:44.718 17:57:44 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:44.718 * Looking for test storage... 00:05:44.718 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:44.718 17:57:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1465589 00:05:44.718 17:57:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:44.718 17:57:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.718 17:57:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1465589 00:05:44.718 17:57:45 rpc -- common/autotest_common.sh@829 -- # '[' -z 1465589 ']' 00:05:44.718 17:57:45 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.718 17:57:45 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.718 17:57:45 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:44.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.718 17:57:45 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.718 17:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.977 [2024-07-15 17:57:45.146943] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:05:44.977 [2024-07-15 17:57:45.146996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465589 ] 00:05:44.977 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.977 [2024-07-15 17:57:45.229410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.977 [2024-07-15 17:57:45.302232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.977 [2024-07-15 17:57:45.302275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1465589' to capture a snapshot of events at runtime. 00:05:44.977 [2024-07-15 17:57:45.302285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.977 [2024-07-15 17:57:45.302293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.977 [2024-07-15 17:57:45.302302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1465589 for offline analysis/debug. 00:05:44.977 [2024-07-15 17:57:45.302326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.584 17:57:45 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.584 17:57:45 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.584 17:57:45 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:45.584 17:57:45 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:45.584 17:57:45 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.584 17:57:45 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.584 17:57:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.584 17:57:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.584 17:57:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.584 ************************************ 00:05:45.584 START TEST rpc_integrity 00:05:45.584 ************************************ 00:05:45.584 17:57:45 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:45.584 17:57:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.584 17:57:45 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.584 17:57:45 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.584 17:57:45 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.584 17:57:45 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.584 17:57:45 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.844 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.844 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.844 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.844 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.844 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.844 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.844 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.844 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.844 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.844 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.844 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.844 { 00:05:45.844 "name": "Malloc0", 00:05:45.844 "aliases": [ 00:05:45.844 "4e2e40b5-fb9d-43b8-868d-cfd8238b5fa3" 00:05:45.844 ], 00:05:45.844 "product_name": "Malloc disk", 00:05:45.844 "block_size": 512, 00:05:45.844 "num_blocks": 16384, 00:05:45.844 "uuid": "4e2e40b5-fb9d-43b8-868d-cfd8238b5fa3", 00:05:45.844 "assigned_rate_limits": { 00:05:45.844 "rw_ios_per_sec": 0, 00:05:45.844 "rw_mbytes_per_sec": 0, 00:05:45.844 "r_mbytes_per_sec": 0, 00:05:45.844 "w_mbytes_per_sec": 0 00:05:45.844 }, 00:05:45.844 "claimed": false, 00:05:45.844 "zoned": false, 00:05:45.844 "supported_io_types": { 00:05:45.844 "read": true, 00:05:45.844 "write": true, 00:05:45.844 "unmap": true, 00:05:45.844 "flush": true, 00:05:45.844 "reset": true, 00:05:45.844 "nvme_admin": false, 00:05:45.844 "nvme_io": false, 00:05:45.844 "nvme_io_md": false, 00:05:45.844 "write_zeroes": true, 00:05:45.844 "zcopy": true, 00:05:45.844 "get_zone_info": false, 00:05:45.844 "zone_management": false, 00:05:45.844 "zone_append": false, 00:05:45.844 "compare": false, 00:05:45.844 "compare_and_write": false, 00:05:45.844 "abort": true, 00:05:45.844 "seek_hole": false, 00:05:45.844 "seek_data": false, 00:05:45.844 "copy": true, 00:05:45.844 "nvme_iov_md": false 00:05:45.844 }, 00:05:45.844 "memory_domains": [ 00:05:45.844 { 00:05:45.844 "dma_device_id": "system", 00:05:45.844 "dma_device_type": 1 00:05:45.844 }, 00:05:45.844 { 00:05:45.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.844 "dma_device_type": 2 00:05:45.844 } 00:05:45.844 ], 00:05:45.844 "driver_specific": {} 00:05:45.844 } 00:05:45.844 ]' 00:05:45.844 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 [2024-07-15 17:57:46.098817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.845 [2024-07-15 17:57:46.098849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.845 [2024-07-15 17:57:46.098862] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x200c010 00:05:45.845 [2024-07-15 17:57:46.098870] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.845 [2024-07-15 17:57:46.099950] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.845 [2024-07-15 17:57:46.099971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.845 Passthru0 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.845 { 00:05:45.845 "name": "Malloc0", 00:05:45.845 "aliases": [ 00:05:45.845 "4e2e40b5-fb9d-43b8-868d-cfd8238b5fa3" 00:05:45.845 ], 00:05:45.845 "product_name": "Malloc disk", 00:05:45.845 "block_size": 512, 00:05:45.845 "num_blocks": 16384, 00:05:45.845 "uuid": "4e2e40b5-fb9d-43b8-868d-cfd8238b5fa3", 00:05:45.845 "assigned_rate_limits": { 00:05:45.845 "rw_ios_per_sec": 0, 00:05:45.845 "rw_mbytes_per_sec": 0, 00:05:45.845 "r_mbytes_per_sec": 0, 00:05:45.845 "w_mbytes_per_sec": 0 00:05:45.845 }, 00:05:45.845 "claimed": true, 00:05:45.845 "claim_type": "exclusive_write", 00:05:45.845 "zoned": false, 00:05:45.845 "supported_io_types": { 00:05:45.845 "read": true, 00:05:45.845 "write": true, 00:05:45.845 "unmap": true, 00:05:45.845 "flush": true, 00:05:45.845 "reset": true, 00:05:45.845 "nvme_admin": false, 00:05:45.845 "nvme_io": false, 00:05:45.845 "nvme_io_md": false, 00:05:45.845 "write_zeroes": true, 00:05:45.845 "zcopy": true, 00:05:45.845 "get_zone_info": false, 00:05:45.845 "zone_management": false, 00:05:45.845 "zone_append": false, 00:05:45.845 "compare": false, 00:05:45.845 "compare_and_write": false, 00:05:45.845 "abort": true, 00:05:45.845 "seek_hole": false, 00:05:45.845 "seek_data": false, 00:05:45.845 "copy": true, 00:05:45.845 "nvme_iov_md": false 00:05:45.845 }, 00:05:45.845 "memory_domains": [ 00:05:45.845 { 00:05:45.845 "dma_device_id": "system", 00:05:45.845 "dma_device_type": 1 00:05:45.845 }, 00:05:45.845 { 00:05:45.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.845 "dma_device_type": 2 00:05:45.845 } 00:05:45.845 ], 00:05:45.845 "driver_specific": {} 00:05:45.845 }, 00:05:45.845 { 00:05:45.845 "name": "Passthru0", 00:05:45.845 "aliases": [ 00:05:45.845 "3bc9ac81-589a-5fa7-be08-f283651931eb" 00:05:45.845 ], 00:05:45.845 "product_name": "passthru", 00:05:45.845 "block_size": 512, 00:05:45.845 "num_blocks": 16384, 00:05:45.845 "uuid": "3bc9ac81-589a-5fa7-be08-f283651931eb", 00:05:45.845 "assigned_rate_limits": { 00:05:45.845 "rw_ios_per_sec": 0, 00:05:45.845 "rw_mbytes_per_sec": 0, 00:05:45.845 "r_mbytes_per_sec": 0, 00:05:45.845 "w_mbytes_per_sec": 0 00:05:45.845 }, 00:05:45.845 "claimed": false, 00:05:45.845 "zoned": false, 00:05:45.845 "supported_io_types": { 00:05:45.845 "read": true, 00:05:45.845 "write": true, 00:05:45.845 "unmap": true, 00:05:45.845 "flush": true, 00:05:45.845 "reset": true, 00:05:45.845 "nvme_admin": false, 00:05:45.845 "nvme_io": false, 00:05:45.845 "nvme_io_md": false, 00:05:45.845 "write_zeroes": true, 00:05:45.845 "zcopy": true, 00:05:45.845 "get_zone_info": false, 00:05:45.845 "zone_management": false, 00:05:45.845 "zone_append": false, 00:05:45.845 "compare": false, 00:05:45.845 "compare_and_write": false, 00:05:45.845 "abort": true, 00:05:45.845 "seek_hole": false, 00:05:45.845 "seek_data": 
false, 00:05:45.845 "copy": true, 00:05:45.845 "nvme_iov_md": false 00:05:45.845 }, 00:05:45.845 "memory_domains": [ 00:05:45.845 { 00:05:45.845 "dma_device_id": "system", 00:05:45.845 "dma_device_type": 1 00:05:45.845 }, 00:05:45.845 { 00:05:45.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.845 "dma_device_type": 2 00:05:45.845 } 00:05:45.845 ], 00:05:45.845 "driver_specific": { 00:05:45.845 "passthru": { 00:05:45.845 "name": "Passthru0", 00:05:45.845 "base_bdev_name": "Malloc0" 00:05:45.845 } 00:05:45.845 } 00:05:45.845 } 00:05:45.845 ]' 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.845 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.845 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.106 17:57:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.106 00:05:46.106 real 0m0.302s 00:05:46.106 user 0m0.177s 00:05:46.106 sys 0m0.058s 00:05:46.106 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.106 17:57:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 ************************************ 00:05:46.106 END TEST rpc_integrity 00:05:46.106 ************************************ 00:05:46.106 17:57:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.106 17:57:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.106 17:57:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.106 17:57:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.106 17:57:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 ************************************ 00:05:46.106 START TEST rpc_plugins 00:05:46.106 ************************************ 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 
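The rpc_integrity pass traced above is a create/inspect/delete round trip over the target's JSON-RPC socket: make a malloc bdev, layer a passthru bdev on it, confirm both via bdev_get_bdevs, then tear them down and confirm the list is empty. A minimal sketch of that flow, assuming a running spdk_tgt and SPDK's scripts/rpc.py client (the harness itself goes through its rpc_cmd wrapper):
# create an 8 MiB malloc bdev with 512-byte blocks, then wrap it in a passthru bdev
./scripts/rpc.py bdev_malloc_create 8 512                   # prints the new bdev name, e.g. Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs | jq length                 # expect 2: Malloc0 and Passthru0
# delete in reverse order and verify the bdev list is empty again
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_get_bdevs | jq length                 # expect 0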
00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.106 { 00:05:46.106 "name": "Malloc1", 00:05:46.106 "aliases": [ 00:05:46.106 "062b6310-265f-4c52-b0af-41b9261fc960" 00:05:46.106 ], 00:05:46.106 "product_name": "Malloc disk", 00:05:46.106 "block_size": 4096, 00:05:46.106 "num_blocks": 256, 00:05:46.106 "uuid": "062b6310-265f-4c52-b0af-41b9261fc960", 00:05:46.106 "assigned_rate_limits": { 00:05:46.106 "rw_ios_per_sec": 0, 00:05:46.106 "rw_mbytes_per_sec": 0, 00:05:46.106 "r_mbytes_per_sec": 0, 00:05:46.106 "w_mbytes_per_sec": 0 00:05:46.106 }, 00:05:46.106 "claimed": false, 00:05:46.106 "zoned": false, 00:05:46.106 "supported_io_types": { 00:05:46.106 "read": true, 00:05:46.106 "write": true, 00:05:46.106 "unmap": true, 00:05:46.106 "flush": true, 00:05:46.106 "reset": true, 00:05:46.106 "nvme_admin": false, 00:05:46.106 "nvme_io": false, 00:05:46.106 "nvme_io_md": false, 00:05:46.106 "write_zeroes": true, 00:05:46.106 "zcopy": true, 00:05:46.106 "get_zone_info": false, 00:05:46.106 "zone_management": false, 00:05:46.106 "zone_append": false, 00:05:46.106 "compare": false, 00:05:46.106 "compare_and_write": false, 00:05:46.106 "abort": true, 00:05:46.106 "seek_hole": false, 00:05:46.106 "seek_data": false, 00:05:46.106 "copy": true, 00:05:46.106 "nvme_iov_md": false 00:05:46.106 }, 00:05:46.106 "memory_domains": [ 00:05:46.106 { 00:05:46.106 "dma_device_id": "system", 00:05:46.106 "dma_device_type": 1 00:05:46.106 }, 00:05:46.106 { 00:05:46.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.106 "dma_device_type": 2 00:05:46.106 } 00:05:46.106 ], 00:05:46.106 "driver_specific": {} 00:05:46.106 } 00:05:46.106 ]' 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.106 17:57:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.106 00:05:46.106 real 0m0.140s 00:05:46.106 user 0m0.083s 00:05:46.106 sys 0m0.022s 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.106 17:57:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.106 ************************************ 00:05:46.106 END TEST rpc_plugins 00:05:46.106 ************************************ 00:05:46.366 17:57:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.366 17:57:46 rpc -- rpc/rpc.sh@75 -- # run_test 
rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.366 17:57:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.366 17:57:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.366 17:57:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.366 ************************************ 00:05:46.366 START TEST rpc_trace_cmd_test 00:05:46.366 ************************************ 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.366 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.366 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1465589", 00:05:46.366 "tpoint_group_mask": "0x8", 00:05:46.366 "iscsi_conn": { 00:05:46.366 "mask": "0x2", 00:05:46.366 "tpoint_mask": "0x0" 00:05:46.366 }, 00:05:46.366 "scsi": { 00:05:46.366 "mask": "0x4", 00:05:46.366 "tpoint_mask": "0x0" 00:05:46.366 }, 00:05:46.366 "bdev": { 00:05:46.366 "mask": "0x8", 00:05:46.366 "tpoint_mask": "0xffffffffffffffff" 00:05:46.366 }, 00:05:46.366 "nvmf_rdma": { 00:05:46.366 "mask": "0x10", 00:05:46.366 "tpoint_mask": "0x0" 00:05:46.366 }, 00:05:46.366 "nvmf_tcp": { 00:05:46.366 "mask": "0x20", 00:05:46.366 "tpoint_mask": "0x0" 00:05:46.366 }, 00:05:46.366 "ftl": { 00:05:46.366 "mask": "0x40", 00:05:46.366 "tpoint_mask": "0x0" 00:05:46.366 }, 00:05:46.366 "blobfs": { 00:05:46.366 "mask": "0x80", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "dsa": { 00:05:46.367 "mask": "0x200", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "thread": { 00:05:46.367 "mask": "0x400", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "nvme_pcie": { 00:05:46.367 "mask": "0x800", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "iaa": { 00:05:46.367 "mask": "0x1000", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "nvme_tcp": { 00:05:46.367 "mask": "0x2000", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "bdev_nvme": { 00:05:46.367 "mask": "0x4000", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 }, 00:05:46.367 "sock": { 00:05:46.367 "mask": "0x8000", 00:05:46.367 "tpoint_mask": "0x0" 00:05:46.367 } 00:05:46.367 }' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.367 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.627 17:57:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.627 
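The trace checks above lean on trace_get_info: because spdk_tgt was launched with the bdev tracepoint group enabled, the reply carries the per-process shared-memory path, a tpoint_group_mask of 0x8 (the bdev group) and a fully set bdev tpoint_mask. A sketch of the same queries, assuming scripts/rpc.py against the default /var/tmp/spdk.sock:
info=$(./scripts/rpc.py trace_get_info)
echo "$info" | jq -r .tpoint_shm_path         # /dev/shm/spdk_tgt_trace.pid<pid>
echo "$info" | jq -r .tpoint_group_mask       # 0x8, i.e. the bdev tracepoint group
echo "$info" | jq -r .bdev.tpoint_mask        # 0xffffffffffffffff while that group is enabled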
00:05:46.627 real 0m0.214s 00:05:46.627 user 0m0.169s 00:05:46.627 sys 0m0.037s 00:05:46.627 17:57:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.627 17:57:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 ************************************ 00:05:46.627 END TEST rpc_trace_cmd_test 00:05:46.627 ************************************ 00:05:46.627 17:57:46 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.627 17:57:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.627 17:57:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.627 17:57:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.627 17:57:46 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.627 17:57:46 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.627 17:57:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 ************************************ 00:05:46.627 START TEST rpc_daemon_integrity 00:05:46.627 ************************************ 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.627 { 00:05:46.627 "name": "Malloc2", 00:05:46.627 "aliases": [ 00:05:46.627 "d8a4a109-5b80-4105-a612-388cd0205b9d" 00:05:46.627 ], 00:05:46.627 "product_name": "Malloc disk", 00:05:46.627 "block_size": 512, 00:05:46.627 "num_blocks": 16384, 00:05:46.627 "uuid": "d8a4a109-5b80-4105-a612-388cd0205b9d", 00:05:46.627 "assigned_rate_limits": { 00:05:46.627 "rw_ios_per_sec": 0, 00:05:46.627 "rw_mbytes_per_sec": 0, 00:05:46.627 "r_mbytes_per_sec": 0, 00:05:46.627 "w_mbytes_per_sec": 0 00:05:46.627 }, 00:05:46.627 "claimed": false, 00:05:46.627 "zoned": false, 00:05:46.627 "supported_io_types": { 00:05:46.627 "read": true, 00:05:46.627 "write": true, 00:05:46.627 "unmap": true, 00:05:46.627 "flush": true, 00:05:46.627 "reset": true, 00:05:46.627 "nvme_admin": false, 00:05:46.627 "nvme_io": false, 00:05:46.627 
"nvme_io_md": false, 00:05:46.627 "write_zeroes": true, 00:05:46.627 "zcopy": true, 00:05:46.627 "get_zone_info": false, 00:05:46.627 "zone_management": false, 00:05:46.627 "zone_append": false, 00:05:46.627 "compare": false, 00:05:46.627 "compare_and_write": false, 00:05:46.627 "abort": true, 00:05:46.627 "seek_hole": false, 00:05:46.627 "seek_data": false, 00:05:46.627 "copy": true, 00:05:46.627 "nvme_iov_md": false 00:05:46.627 }, 00:05:46.627 "memory_domains": [ 00:05:46.627 { 00:05:46.627 "dma_device_id": "system", 00:05:46.627 "dma_device_type": 1 00:05:46.627 }, 00:05:46.627 { 00:05:46.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.627 "dma_device_type": 2 00:05:46.627 } 00:05:46.627 ], 00:05:46.627 "driver_specific": {} 00:05:46.627 } 00:05:46.627 ]' 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.627 [2024-07-15 17:57:46.985237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.627 [2024-07-15 17:57:46.985268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.627 [2024-07-15 17:57:46.985281] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21bd6f0 00:05:46.627 [2024-07-15 17:57:46.985289] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.627 [2024-07-15 17:57:46.986210] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.627 [2024-07-15 17:57:46.986231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.627 Passthru0 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.627 17:57:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.628 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.628 17:57:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.628 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.628 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.628 { 00:05:46.628 "name": "Malloc2", 00:05:46.628 "aliases": [ 00:05:46.628 "d8a4a109-5b80-4105-a612-388cd0205b9d" 00:05:46.628 ], 00:05:46.628 "product_name": "Malloc disk", 00:05:46.628 "block_size": 512, 00:05:46.628 "num_blocks": 16384, 00:05:46.628 "uuid": "d8a4a109-5b80-4105-a612-388cd0205b9d", 00:05:46.628 "assigned_rate_limits": { 00:05:46.628 "rw_ios_per_sec": 0, 00:05:46.628 "rw_mbytes_per_sec": 0, 00:05:46.628 "r_mbytes_per_sec": 0, 00:05:46.628 "w_mbytes_per_sec": 0 00:05:46.628 }, 00:05:46.628 "claimed": true, 00:05:46.628 "claim_type": "exclusive_write", 00:05:46.628 "zoned": false, 00:05:46.628 "supported_io_types": { 00:05:46.628 "read": true, 00:05:46.628 "write": true, 00:05:46.628 "unmap": true, 00:05:46.628 "flush": true, 00:05:46.628 "reset": true, 00:05:46.628 "nvme_admin": false, 00:05:46.628 "nvme_io": false, 00:05:46.628 "nvme_io_md": false, 00:05:46.628 "write_zeroes": true, 00:05:46.628 "zcopy": true, 00:05:46.628 "get_zone_info": false, 
00:05:46.628 "zone_management": false, 00:05:46.628 "zone_append": false, 00:05:46.628 "compare": false, 00:05:46.628 "compare_and_write": false, 00:05:46.628 "abort": true, 00:05:46.628 "seek_hole": false, 00:05:46.628 "seek_data": false, 00:05:46.628 "copy": true, 00:05:46.628 "nvme_iov_md": false 00:05:46.628 }, 00:05:46.628 "memory_domains": [ 00:05:46.628 { 00:05:46.628 "dma_device_id": "system", 00:05:46.628 "dma_device_type": 1 00:05:46.628 }, 00:05:46.628 { 00:05:46.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.628 "dma_device_type": 2 00:05:46.628 } 00:05:46.628 ], 00:05:46.628 "driver_specific": {} 00:05:46.628 }, 00:05:46.628 { 00:05:46.628 "name": "Passthru0", 00:05:46.628 "aliases": [ 00:05:46.628 "5cf1515b-bbb2-5415-923f-15a6e0f2a4e5" 00:05:46.628 ], 00:05:46.628 "product_name": "passthru", 00:05:46.628 "block_size": 512, 00:05:46.628 "num_blocks": 16384, 00:05:46.628 "uuid": "5cf1515b-bbb2-5415-923f-15a6e0f2a4e5", 00:05:46.628 "assigned_rate_limits": { 00:05:46.628 "rw_ios_per_sec": 0, 00:05:46.628 "rw_mbytes_per_sec": 0, 00:05:46.628 "r_mbytes_per_sec": 0, 00:05:46.628 "w_mbytes_per_sec": 0 00:05:46.628 }, 00:05:46.628 "claimed": false, 00:05:46.628 "zoned": false, 00:05:46.628 "supported_io_types": { 00:05:46.628 "read": true, 00:05:46.628 "write": true, 00:05:46.628 "unmap": true, 00:05:46.628 "flush": true, 00:05:46.628 "reset": true, 00:05:46.628 "nvme_admin": false, 00:05:46.628 "nvme_io": false, 00:05:46.628 "nvme_io_md": false, 00:05:46.628 "write_zeroes": true, 00:05:46.628 "zcopy": true, 00:05:46.628 "get_zone_info": false, 00:05:46.628 "zone_management": false, 00:05:46.628 "zone_append": false, 00:05:46.628 "compare": false, 00:05:46.628 "compare_and_write": false, 00:05:46.628 "abort": true, 00:05:46.628 "seek_hole": false, 00:05:46.628 "seek_data": false, 00:05:46.628 "copy": true, 00:05:46.628 "nvme_iov_md": false 00:05:46.628 }, 00:05:46.628 "memory_domains": [ 00:05:46.628 { 00:05:46.628 "dma_device_id": "system", 00:05:46.628 "dma_device_type": 1 00:05:46.628 }, 00:05:46.628 { 00:05:46.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.628 "dma_device_type": 2 00:05:46.628 } 00:05:46.628 ], 00:05:46.628 "driver_specific": { 00:05:46.628 "passthru": { 00:05:46.628 "name": "Passthru0", 00:05:46.628 "base_bdev_name": "Malloc2" 00:05:46.628 } 00:05:46.628 } 00:05:46.628 } 00:05:46.628 ]' 00:05:46.628 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.888 17:57:47 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.888 00:05:46.888 real 0m0.285s 00:05:46.888 user 0m0.165s 00:05:46.888 sys 0m0.055s 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.888 17:57:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.888 ************************************ 00:05:46.888 END TEST rpc_daemon_integrity 00:05:46.888 ************************************ 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:46.888 17:57:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.888 17:57:47 rpc -- rpc/rpc.sh@84 -- # killprocess 1465589 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@948 -- # '[' -z 1465589 ']' 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@952 -- # kill -0 1465589 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@953 -- # uname 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465589 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465589' 00:05:46.888 killing process with pid 1465589 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@967 -- # kill 1465589 00:05:46.888 17:57:47 rpc -- common/autotest_common.sh@972 -- # wait 1465589 00:05:47.148 00:05:47.148 real 0m2.534s 00:05:47.148 user 0m3.174s 00:05:47.148 sys 0m0.824s 00:05:47.148 17:57:47 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.148 17:57:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.148 ************************************ 00:05:47.148 END TEST rpc 00:05:47.148 ************************************ 00:05:47.407 17:57:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.407 17:57:47 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.407 17:57:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.407 17:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.407 17:57:47 -- common/autotest_common.sh@10 -- # set +x 00:05:47.407 ************************************ 00:05:47.407 START TEST skip_rpc 00:05:47.407 ************************************ 00:05:47.407 17:57:47 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:47.407 * Looking for test storage... 
00:05:47.407 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:47.407 17:57:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:47.407 17:57:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:47.407 17:57:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.407 17:57:47 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.407 17:57:47 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.407 17:57:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.407 ************************************ 00:05:47.407 START TEST skip_rpc 00:05:47.407 ************************************ 00:05:47.407 17:57:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:47.407 17:57:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.407 17:57:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1466080 00:05:47.407 17:57:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.407 17:57:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.407 [2024-07-15 17:57:47.790934] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:05:47.407 [2024-07-15 17:57:47.790978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466080 ] 00:05:47.666 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.666 [2024-07-15 17:57:47.871211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.666 [2024-07-15 17:57:47.940385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1466080 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1466080 ']' 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1466080 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1466080 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1466080' 00:05:52.943 killing process with pid 1466080 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1466080 00:05:52.943 17:57:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1466080 00:05:52.943 00:05:52.943 real 0m5.377s 00:05:52.943 user 0m5.132s 00:05:52.943 sys 0m0.284s 00:05:52.943 17:57:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.943 17:57:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.943 ************************************ 00:05:52.943 END TEST skip_rpc 00:05:52.943 ************************************ 00:05:52.943 17:57:53 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.943 17:57:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.943 17:57:53 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.943 17:57:53 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.943 17:57:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.943 ************************************ 00:05:52.943 START TEST skip_rpc_with_json 00:05:52.943 ************************************ 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1467129 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1467129 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1467129 ']' 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
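The skip_rpc case above starts the target with --no-rpc-server, so the expectation is inverted: every RPC, even spdk_get_version, has to fail. A rough sketch of that negative check, assuming the same spdk_tgt binary and rpc.py client paths used by the harness:
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5                                        # the harness sleeps rather than waiting on the socket
if ./scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered with --no-rpc-server" >&2
    exit 1
fi
kill $tgt_pid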
00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.943 17:57:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.943 [2024-07-15 17:57:53.263512] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:05:52.943 [2024-07-15 17:57:53.263559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467129 ] 00:05:52.943 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.203 [2024-07-15 17:57:53.345539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.203 [2024-07-15 17:57:53.418069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 [2024-07-15 17:57:54.057117] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.771 request: 00:05:53.771 { 00:05:53.771 "trtype": "tcp", 00:05:53.771 "method": "nvmf_get_transports", 00:05:53.771 "req_id": 1 00:05:53.771 } 00:05:53.771 Got JSON-RPC error response 00:05:53.771 response: 00:05:53.771 { 00:05:53.771 "code": -19, 00:05:53.771 "message": "No such device" 00:05:53.771 } 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.771 [2024-07-15 17:57:54.069221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.771 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.030 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.030 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:54.030 { 00:05:54.030 "subsystems": [ 00:05:54.030 { 00:05:54.031 "subsystem": "keyring", 00:05:54.031 "config": [] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "iobuf", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "iobuf_set_options", 00:05:54.031 "params": { 00:05:54.031 "small_pool_count": 8192, 00:05:54.031 "large_pool_count": 1024, 00:05:54.031 "small_bufsize": 8192, 00:05:54.031 "large_bufsize": 135168 00:05:54.031 } 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": 
"sock", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "sock_set_default_impl", 00:05:54.031 "params": { 00:05:54.031 "impl_name": "posix" 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "sock_impl_set_options", 00:05:54.031 "params": { 00:05:54.031 "impl_name": "ssl", 00:05:54.031 "recv_buf_size": 4096, 00:05:54.031 "send_buf_size": 4096, 00:05:54.031 "enable_recv_pipe": true, 00:05:54.031 "enable_quickack": false, 00:05:54.031 "enable_placement_id": 0, 00:05:54.031 "enable_zerocopy_send_server": true, 00:05:54.031 "enable_zerocopy_send_client": false, 00:05:54.031 "zerocopy_threshold": 0, 00:05:54.031 "tls_version": 0, 00:05:54.031 "enable_ktls": false 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "sock_impl_set_options", 00:05:54.031 "params": { 00:05:54.031 "impl_name": "posix", 00:05:54.031 "recv_buf_size": 2097152, 00:05:54.031 "send_buf_size": 2097152, 00:05:54.031 "enable_recv_pipe": true, 00:05:54.031 "enable_quickack": false, 00:05:54.031 "enable_placement_id": 0, 00:05:54.031 "enable_zerocopy_send_server": true, 00:05:54.031 "enable_zerocopy_send_client": false, 00:05:54.031 "zerocopy_threshold": 0, 00:05:54.031 "tls_version": 0, 00:05:54.031 "enable_ktls": false 00:05:54.031 } 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "vmd", 00:05:54.031 "config": [] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "accel", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "accel_set_options", 00:05:54.031 "params": { 00:05:54.031 "small_cache_size": 128, 00:05:54.031 "large_cache_size": 16, 00:05:54.031 "task_count": 2048, 00:05:54.031 "sequence_count": 2048, 00:05:54.031 "buf_count": 2048 00:05:54.031 } 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "bdev", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "bdev_set_options", 00:05:54.031 "params": { 00:05:54.031 "bdev_io_pool_size": 65535, 00:05:54.031 "bdev_io_cache_size": 256, 00:05:54.031 "bdev_auto_examine": true, 00:05:54.031 "iobuf_small_cache_size": 128, 00:05:54.031 "iobuf_large_cache_size": 16 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "bdev_raid_set_options", 00:05:54.031 "params": { 00:05:54.031 "process_window_size_kb": 1024 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "bdev_iscsi_set_options", 00:05:54.031 "params": { 00:05:54.031 "timeout_sec": 30 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "bdev_nvme_set_options", 00:05:54.031 "params": { 00:05:54.031 "action_on_timeout": "none", 00:05:54.031 "timeout_us": 0, 00:05:54.031 "timeout_admin_us": 0, 00:05:54.031 "keep_alive_timeout_ms": 10000, 00:05:54.031 "arbitration_burst": 0, 00:05:54.031 "low_priority_weight": 0, 00:05:54.031 "medium_priority_weight": 0, 00:05:54.031 "high_priority_weight": 0, 00:05:54.031 "nvme_adminq_poll_period_us": 10000, 00:05:54.031 "nvme_ioq_poll_period_us": 0, 00:05:54.031 "io_queue_requests": 0, 00:05:54.031 "delay_cmd_submit": true, 00:05:54.031 "transport_retry_count": 4, 00:05:54.031 "bdev_retry_count": 3, 00:05:54.031 "transport_ack_timeout": 0, 00:05:54.031 "ctrlr_loss_timeout_sec": 0, 00:05:54.031 "reconnect_delay_sec": 0, 00:05:54.031 "fast_io_fail_timeout_sec": 0, 00:05:54.031 "disable_auto_failback": false, 00:05:54.031 "generate_uuids": false, 00:05:54.031 "transport_tos": 0, 00:05:54.031 "nvme_error_stat": false, 00:05:54.031 "rdma_srq_size": 0, 00:05:54.031 "io_path_stat": false, 
00:05:54.031 "allow_accel_sequence": false, 00:05:54.031 "rdma_max_cq_size": 0, 00:05:54.031 "rdma_cm_event_timeout_ms": 0, 00:05:54.031 "dhchap_digests": [ 00:05:54.031 "sha256", 00:05:54.031 "sha384", 00:05:54.031 "sha512" 00:05:54.031 ], 00:05:54.031 "dhchap_dhgroups": [ 00:05:54.031 "null", 00:05:54.031 "ffdhe2048", 00:05:54.031 "ffdhe3072", 00:05:54.031 "ffdhe4096", 00:05:54.031 "ffdhe6144", 00:05:54.031 "ffdhe8192" 00:05:54.031 ] 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "bdev_nvme_set_hotplug", 00:05:54.031 "params": { 00:05:54.031 "period_us": 100000, 00:05:54.031 "enable": false 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "bdev_wait_for_examine" 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "scsi", 00:05:54.031 "config": null 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "scheduler", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "framework_set_scheduler", 00:05:54.031 "params": { 00:05:54.031 "name": "static" 00:05:54.031 } 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "vhost_scsi", 00:05:54.031 "config": [] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "vhost_blk", 00:05:54.031 "config": [] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "ublk", 00:05:54.031 "config": [] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "nbd", 00:05:54.031 "config": [] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "nvmf", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "nvmf_set_config", 00:05:54.031 "params": { 00:05:54.031 "discovery_filter": "match_any", 00:05:54.031 "admin_cmd_passthru": { 00:05:54.031 "identify_ctrlr": false 00:05:54.031 } 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "nvmf_set_max_subsystems", 00:05:54.031 "params": { 00:05:54.031 "max_subsystems": 1024 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "nvmf_set_crdt", 00:05:54.031 "params": { 00:05:54.031 "crdt1": 0, 00:05:54.031 "crdt2": 0, 00:05:54.031 "crdt3": 0 00:05:54.031 } 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "method": "nvmf_create_transport", 00:05:54.031 "params": { 00:05:54.031 "trtype": "TCP", 00:05:54.031 "max_queue_depth": 128, 00:05:54.031 "max_io_qpairs_per_ctrlr": 127, 00:05:54.031 "in_capsule_data_size": 4096, 00:05:54.031 "max_io_size": 131072, 00:05:54.031 "io_unit_size": 131072, 00:05:54.031 "max_aq_depth": 128, 00:05:54.031 "num_shared_buffers": 511, 00:05:54.031 "buf_cache_size": 4294967295, 00:05:54.031 "dif_insert_or_strip": false, 00:05:54.031 "zcopy": false, 00:05:54.031 "c2h_success": true, 00:05:54.031 "sock_priority": 0, 00:05:54.031 "abort_timeout_sec": 1, 00:05:54.031 "ack_timeout": 0, 00:05:54.031 "data_wr_pool_size": 0 00:05:54.031 } 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 }, 00:05:54.031 { 00:05:54.031 "subsystem": "iscsi", 00:05:54.031 "config": [ 00:05:54.031 { 00:05:54.031 "method": "iscsi_set_options", 00:05:54.031 "params": { 00:05:54.031 "node_base": "iqn.2016-06.io.spdk", 00:05:54.031 "max_sessions": 128, 00:05:54.031 "max_connections_per_session": 2, 00:05:54.031 "max_queue_depth": 64, 00:05:54.031 "default_time2wait": 2, 00:05:54.031 "default_time2retain": 20, 00:05:54.031 "first_burst_length": 8192, 00:05:54.031 "immediate_data": true, 00:05:54.031 "allow_duplicated_isid": false, 00:05:54.031 "error_recovery_level": 0, 00:05:54.031 "nop_timeout": 60, 00:05:54.031 "nop_in_interval": 30, 00:05:54.031 "disable_chap": 
false, 00:05:54.031 "require_chap": false, 00:05:54.031 "mutual_chap": false, 00:05:54.031 "chap_group": 0, 00:05:54.031 "max_large_datain_per_connection": 64, 00:05:54.031 "max_r2t_per_connection": 4, 00:05:54.031 "pdu_pool_size": 36864, 00:05:54.031 "immediate_data_pool_size": 16384, 00:05:54.031 "data_out_pool_size": 2048 00:05:54.031 } 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 } 00:05:54.031 ] 00:05:54.031 } 00:05:54.031 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.031 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1467129 00:05:54.031 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1467129 ']' 00:05:54.031 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1467129 00:05:54.031 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:54.031 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.032 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467129 00:05:54.032 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.032 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.032 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467129' 00:05:54.032 killing process with pid 1467129 00:05:54.032 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1467129 00:05:54.032 17:57:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1467129 00:05:54.291 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1467407 00:05:54.291 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.291 17:57:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1467407 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1467407 ']' 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1467407 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467407 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467407' 00:05:59.568 killing process with pid 1467407 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1467407 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1467407 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:59.568 00:05:59.568 real 0m6.756s 00:05:59.568 user 0m6.531s 00:05:59.568 sys 0m0.679s 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.568 17:57:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.568 ************************************ 00:05:59.568 END TEST skip_rpc_with_json 00:05:59.568 ************************************ 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.828 17:58:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.828 ************************************ 00:05:59.828 START TEST skip_rpc_with_delay 00:05:59.828 ************************************ 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.828 [2024-07-15 17:58:00.107954] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
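skip_rpc_with_json, traced above, round-trips the runtime configuration: it creates the nvmf TCP transport over RPC, dumps the state with save_config, relaunches the target with --json and --no-rpc-server, and greps the new log for the "TCP Transport Init" notice to prove the transport was recreated from the JSON alone. A condensed, self-contained sketch, with config.json and log.txt standing in for the harness paths:
./build/bin/spdk_tgt -m 0x1 &                                 # first run, RPC server enabled
tgt_pid=$!; sleep 5
./scripts/rpc.py nvmf_create_transport -t tcp                 # runtime change worth capturing
./scripts/rpc.py save_config > config.json
kill $tgt_pid; wait $tgt_pid 2>/dev/null
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
sleep 5
grep -q 'TCP Transport Init' log.txt && echo "transport restored from JSON"
kill $!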
00:05:59.828 [2024-07-15 17:58:00.108041] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:59.828 00:05:59.828 real 0m0.070s 00:05:59.828 user 0m0.038s 00:05:59.828 sys 0m0.032s 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.828 17:58:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:59.828 ************************************ 00:05:59.828 END TEST skip_rpc_with_delay 00:05:59.828 ************************************ 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:59.828 17:58:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:59.828 17:58:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:59.828 17:58:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.828 17:58:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.828 ************************************ 00:05:59.828 START TEST exit_on_failed_rpc_init 00:05:59.828 ************************************ 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1468335 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1468335 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1468335 ']' 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.828 17:58:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.087 [2024-07-15 17:58:00.260747] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:06:00.087 [2024-07-15 17:58:00.260792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468335 ] 00:06:00.087 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.087 [2024-07-15 17:58:00.342594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.087 [2024-07-15 17:58:00.412435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:00.655 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:00.914 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:00.914 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.914 [2024-07-15 17:58:01.108303] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:00.914 [2024-07-15 17:58:01.108351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468534 ] 00:06:00.914 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.914 [2024-07-15 17:58:01.188838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.914 [2024-07-15 17:58:01.259559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.914 [2024-07-15 17:58:01.259646] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:06:00.914 [2024-07-15 17:58:01.259658] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:00.914 [2024-07-15 17:58:01.259666] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1468335 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1468335 ']' 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1468335 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468335 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468335' 00:06:01.172 killing process with pid 1468335 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1468335 00:06:01.172 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1468335 00:06:01.431 00:06:01.431 real 0m1.481s 00:06:01.431 user 0m1.659s 00:06:01.431 sys 0m0.473s 00:06:01.431 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.431 17:58:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.431 ************************************ 00:06:01.431 END TEST exit_on_failed_rpc_init 00:06:01.431 ************************************ 00:06:01.431 17:58:01 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:01.431 17:58:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:01.431 00:06:01.431 real 0m14.129s 00:06:01.431 user 0m13.517s 00:06:01.431 sys 0m1.784s 00:06:01.431 17:58:01 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.431 17:58:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.431 ************************************ 00:06:01.431 END TEST skip_rpc 00:06:01.431 ************************************ 00:06:01.431 17:58:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.431 17:58:01 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:01.431 17:58:01 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.431 17:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.431 17:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.431 ************************************ 00:06:01.431 START TEST rpc_client 00:06:01.431 ************************************ 00:06:01.431 17:58:01 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:01.690 * Looking for test storage... 00:06:01.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:01.690 17:58:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:01.690 OK 00:06:01.690 17:58:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.690 00:06:01.690 real 0m0.126s 00:06:01.690 user 0m0.054s 00:06:01.690 sys 0m0.082s 00:06:01.690 17:58:01 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.690 17:58:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:01.690 ************************************ 00:06:01.690 END TEST rpc_client 00:06:01.690 ************************************ 00:06:01.690 17:58:01 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.690 17:58:01 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.690 17:58:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.690 17:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.690 17:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:01.690 ************************************ 00:06:01.690 START TEST json_config 00:06:01.690 ************************************ 00:06:01.690 17:58:02 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:01.690 17:58:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@21 
-- # NET_TYPE=phy-fallback 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:01.949 17:58:02 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.949 17:58:02 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.949 17:58:02 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.949 17:58:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.949 17:58:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.949 17:58:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.949 17:58:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:01.949 17:58:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@47 -- # : 0 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.949 17:58:02 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.949 17:58:02 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:01.949 INFO: JSON configuration test init 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:01.949 17:58:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.949 17:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:01.949 17:58:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.949 17:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.949 17:58:02 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:01.949 17:58:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.949 17:58:02 json_config -- json_config/common.sh@10 -- # shift 00:06:01.949 17:58:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.949 17:58:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.949 17:58:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.949 17:58:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.949 17:58:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.950 17:58:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1468902 00:06:01.950 17:58:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.950 Waiting for target to run... 
00:06:01.950 17:58:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:01.950 17:58:02 json_config -- json_config/common.sh@25 -- # waitforlisten 1468902 /var/tmp/spdk_tgt.sock 00:06:01.950 17:58:02 json_config -- common/autotest_common.sh@829 -- # '[' -z 1468902 ']' 00:06:01.950 17:58:02 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.950 17:58:02 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.950 17:58:02 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.950 17:58:02 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.950 17:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.950 [2024-07-15 17:58:02.185210] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:01.950 [2024-07-15 17:58:02.185262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468902 ] 00:06:01.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.525 [2024-07-15 17:58:02.634589] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.525 [2024-07-15 17:58:02.715816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.786 17:58:02 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.786 17:58:02 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:02.786 17:58:02 json_config -- json_config/common.sh@26 -- # echo '' 00:06:02.786 00:06:02.786 17:58:02 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:02.786 17:58:02 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:02.786 17:58:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.786 17:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.786 17:58:02 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:02.786 17:58:02 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:02.786 17:58:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.786 17:58:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.786 17:58:03 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:02.786 17:58:03 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:02.786 17:58:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.143 17:58:06 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:06.143 17:58:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:06:06.143 17:58:06 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:06.143 17:58:06 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:06:06.143 17:58:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@289 -- 
# local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:14.267 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:14.267 17:58:14 json_config -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:14.267 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:14.267 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:14.267 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@58 -- # uname 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:14.267 17:58:14 json_config -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:14.267 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:14.267 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:14.267 altname enp217s0f0np0 00:06:14.267 altname ens818f0np0 00:06:14.267 inet 192.168.100.8/24 scope global mlx_0_0 00:06:14.267 valid_lft forever preferred_lft forever 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:14.267 17:58:14 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:14.268 17:58:14 
json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:14.268 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:14.268 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:14.268 altname enp217s0f1np1 00:06:14.268 altname ens818f1np1 00:06:14.268 inet 192.168.100.9/24 scope global mlx_0_1 00:06:14.268 valid_lft forever preferred_lft forever 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@422 -- # return 0 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:14.268 17:58:14 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:06:14.527 192.168.100.9' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:14.527 192.168.100.9' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:14.527 192.168.100.9' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:14.527 17:58:14 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:14.527 17:58:14 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:06:14.527 17:58:14 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:14.527 17:58:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:14.527 MallocForNvmf0 00:06:14.786 17:58:14 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:14.786 17:58:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:14.786 MallocForNvmf1 00:06:14.786 17:58:15 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:14.786 17:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:15.045 [2024-07-15 17:58:15.246726] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:15.045 [2024-07-15 17:58:15.278950] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x885f20/0x9b2dc0) succeed. 00:06:15.045 [2024-07-15 17:58:15.290873] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x888110/0x892c80) succeed. 
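The 192.168.100.8 and 192.168.100.9 addresses used from here on are read off the mlx_0_0/mlx_0_1 netdevs enumerated above; the derivation visible in the nvmf/common.sh trace boils down to:

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8 -> NVMF_FIRST_TARGET_IP
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9 -> NVMF_SECOND_TARGET_IP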
00:06:15.045 17:58:15 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.045 17:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.304 17:58:15 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:15.304 17:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:15.304 17:58:15 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:15.304 17:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:15.562 17:58:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:15.562 17:58:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:15.820 [2024-07-15 17:58:16.006896] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:15.820 17:58:16 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:15.820 17:58:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.820 17:58:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.820 17:58:16 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:15.820 17:58:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:15.820 17:58:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:15.820 17:58:16 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:15.820 17:58:16 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:15.820 17:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:16.079 MallocBdevForConfigChangeCheck 00:06:16.079 17:58:16 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:16.079 17:58:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.079 17:58:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.079 17:58:16 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:16.079 17:58:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.338 17:58:16 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:16.338 INFO: shutting down applications... 
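Collected from the tgt_rpc calls above, the RPC sequence that builds the NVMe-oF/RDMA configuration being saved is roughly the following (rpc.py invoked against /var/tmp/spdk_tgt.sock as in the trace; writing the save_config output straight to spdk_tgt_config.json is an assumption, the harness routes it through its configs_path):

RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t rdma -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC save_config > spdk_tgt_config.json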
00:06:16.338 17:58:16 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:16.338 17:58:16 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:16.338 17:58:16 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:16.338 17:58:16 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:18.870 Calling clear_iscsi_subsystem 00:06:18.870 Calling clear_nvmf_subsystem 00:06:18.870 Calling clear_nbd_subsystem 00:06:18.870 Calling clear_ublk_subsystem 00:06:18.870 Calling clear_vhost_blk_subsystem 00:06:18.870 Calling clear_vhost_scsi_subsystem 00:06:18.870 Calling clear_bdev_subsystem 00:06:18.870 17:58:19 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:18.870 17:58:19 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:18.870 17:58:19 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:18.870 17:58:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:18.870 17:58:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:18.870 17:58:19 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:19.129 17:58:19 json_config -- json_config/json_config.sh@345 -- # break 00:06:19.129 17:58:19 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:19.129 17:58:19 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:19.129 17:58:19 json_config -- json_config/common.sh@31 -- # local app=target 00:06:19.129 17:58:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:19.129 17:58:19 json_config -- json_config/common.sh@35 -- # [[ -n 1468902 ]] 00:06:19.129 17:58:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1468902 00:06:19.129 17:58:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:19.129 17:58:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.129 17:58:19 json_config -- json_config/common.sh@41 -- # kill -0 1468902 00:06:19.129 17:58:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.696 17:58:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.696 17:58:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.696 17:58:19 json_config -- json_config/common.sh@41 -- # kill -0 1468902 00:06:19.696 17:58:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:19.696 17:58:19 json_config -- json_config/common.sh@43 -- # break 00:06:19.696 17:58:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:19.696 17:58:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:19.696 SPDK target shutdown done 00:06:19.696 17:58:19 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:19.696 INFO: relaunching applications... 
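The shutdown above does not kill the target outright: json_config/common.sh sends SIGINT and then polls the pid until it disappears. Condensed, with the pid from this run:

kill -SIGINT 1468902
for (( i = 0; i < 30; i++ )); do
    kill -0 1468902 2>/dev/null || break   # process gone, shutdown complete
    sleep 0.5
done
echo 'SPDK target shutdown done'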
00:06:19.696 17:58:19 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.696 17:58:19 json_config -- json_config/common.sh@9 -- # local app=target 00:06:19.696 17:58:19 json_config -- json_config/common.sh@10 -- # shift 00:06:19.696 17:58:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.696 17:58:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.696 17:58:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.696 17:58:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.696 17:58:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.696 17:58:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1474520 00:06:19.696 17:58:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.696 Waiting for target to run... 00:06:19.696 17:58:19 json_config -- json_config/common.sh@25 -- # waitforlisten 1474520 /var/tmp/spdk_tgt.sock 00:06:19.696 17:58:19 json_config -- common/autotest_common.sh@829 -- # '[' -z 1474520 ']' 00:06:19.696 17:58:19 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.696 17:58:19 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.696 17:58:19 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.696 17:58:19 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.696 17:58:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.696 17:58:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.696 [2024-07-15 17:58:20.044509] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:19.696 [2024-07-15 17:58:20.044593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474520 ] 00:06:19.696 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.955 [2024-07-15 17:58:20.353485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.213 [2024-07-15 17:58:20.417269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.500 [2024-07-15 17:58:23.476241] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1338e90/0x12bf3c0) succeed. 00:06:23.500 [2024-07-15 17:58:23.487087] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1337e80/0x123f380) succeed. 
00:06:23.500 [2024-07-15 17:58:23.536336] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:24.069 17:58:24 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.069 17:58:24 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:24.069 17:58:24 json_config -- json_config/common.sh@26 -- # echo '' 00:06:24.069 00:06:24.069 17:58:24 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:24.069 17:58:24 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:24.069 INFO: Checking if target configuration is the same... 00:06:24.069 17:58:24 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.069 17:58:24 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:24.069 17:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.069 + '[' 2 -ne 2 ']' 00:06:24.069 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.069 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:24.069 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:24.069 +++ basename /dev/fd/62 00:06:24.069 ++ mktemp /tmp/62.XXX 00:06:24.069 + tmp_file_1=/tmp/62.P1J 00:06:24.069 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.069 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.069 + tmp_file_2=/tmp/spdk_tgt_config.json.KgZ 00:06:24.069 + ret=0 00:06:24.069 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.329 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.329 + diff -u /tmp/62.P1J /tmp/spdk_tgt_config.json.KgZ 00:06:24.329 + echo 'INFO: JSON config files are the same' 00:06:24.329 INFO: JSON config files are the same 00:06:24.329 + rm /tmp/62.P1J /tmp/spdk_tgt_config.json.KgZ 00:06:24.329 + exit 0 00:06:24.329 17:58:24 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:24.329 17:58:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:24.329 INFO: changing configuration and checking if this can be detected... 
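The "configuration is the same" check above compares a freshly saved config against the spdk_tgt_config.json the target was relaunched with, after normalizing both through config_filter.py -method sort. A condensed sketch (whether config_filter.py reads stdin or a file argument is not visible in the xtrace, stdin is assumed here):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'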
00:06:24.329 17:58:24 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.329 17:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:24.329 17:58:24 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.329 17:58:24 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:24.329 17:58:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.329 + '[' 2 -ne 2 ']' 00:06:24.588 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:24.588 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:24.588 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:24.588 +++ basename /dev/fd/62 00:06:24.588 ++ mktemp /tmp/62.XXX 00:06:24.588 + tmp_file_1=/tmp/62.aZb 00:06:24.588 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:24.588 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.588 + tmp_file_2=/tmp/spdk_tgt_config.json.yoY 00:06:24.588 + ret=0 00:06:24.588 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.848 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:24.848 + diff -u /tmp/62.aZb /tmp/spdk_tgt_config.json.yoY 00:06:24.848 + ret=1 00:06:24.848 + echo '=== Start of file: /tmp/62.aZb ===' 00:06:24.848 + cat /tmp/62.aZb 00:06:24.848 + echo '=== End of file: /tmp/62.aZb ===' 00:06:24.848 + echo '' 00:06:24.848 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yoY ===' 00:06:24.848 + cat /tmp/spdk_tgt_config.json.yoY 00:06:24.848 + echo '=== End of file: /tmp/spdk_tgt_config.json.yoY ===' 00:06:24.848 + echo '' 00:06:24.848 + rm /tmp/62.aZb /tmp/spdk_tgt_config.json.yoY 00:06:24.848 + exit 1 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:24.848 INFO: configuration change detected. 
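The detectable change is introduced by deleting the marker bdev created during setup and re-running the same normalized diff, which now returns 1:

scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
# repeating the save_config / sort / diff sequence above now shows the bdev missing from the live config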
00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:24.848 17:58:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.848 17:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@317 -- # [[ -n 1474520 ]] 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:24.848 17:58:25 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:24.848 17:58:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.848 17:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.849 17:58:25 json_config -- json_config/json_config.sh@323 -- # killprocess 1474520 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@948 -- # '[' -z 1474520 ']' 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@952 -- # kill -0 1474520 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@953 -- # uname 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474520 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474520' 00:06:24.849 killing process with pid 1474520 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@967 -- # kill 1474520 00:06:24.849 17:58:25 json_config -- common/autotest_common.sh@972 -- # wait 1474520 00:06:27.407 17:58:27 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:27.407 17:58:27 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:27.407 17:58:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.407 17:58:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.407 17:58:27 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:27.407 17:58:27 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:27.407 INFO: Success 00:06:27.407 17:58:27 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@117 -- # sync 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:27.407 17:58:27 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:27.407 00:06:27.407 real 0m25.736s 00:06:27.407 user 0m28.434s 00:06:27.407 sys 0m8.670s 00:06:27.407 17:58:27 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.407 17:58:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.407 ************************************ 00:06:27.407 END TEST json_config 00:06:27.407 ************************************ 00:06:27.407 17:58:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.407 17:58:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:27.407 17:58:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.407 17:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.407 17:58:27 -- common/autotest_common.sh@10 -- # set +x 00:06:27.667 ************************************ 00:06:27.667 START TEST json_config_extra_key 00:06:27.667 ************************************ 00:06:27.667 17:58:27 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.667 17:58:27 
json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:27.667 17:58:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.667 17:58:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.667 17:58:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.667 17:58:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.667 17:58:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.667 17:58:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.667 17:58:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:27.667 17:58:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.667 17:58:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:27.667 17:58:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:27.667 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:27.668 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:27.668 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.668 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:27.668 INFO: launching applications... 00:06:27.668 17:58:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1476079 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.668 Waiting for target to run... 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1476079 /var/tmp/spdk_tgt.sock 00:06:27.668 17:58:27 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1476079 ']' 00:06:27.668 17:58:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.668 17:58:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:27.668 17:58:27 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.668 17:58:27 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
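The json_config_extra_key run above exercises the launch/wait/shutdown helpers from test/json_config/common.sh. A minimal bash sketch of that lifecycle, using the socket, flags, and config path shown in the trace (the real helpers add error trapping and xtrace handling, so treat this as an approximation):

    app_socket=/var/tmp/spdk_tgt.sock
    build/bin/spdk_tgt -m 0x1 -s 1024 -r "$app_socket" \
        --json test/json_config/extra_key.json &
    app_pid=$!
    # wait until the target answers RPCs on its UNIX socket (waitforlisten retries up to 100 times)
    for _ in $(seq 1 100); do
        scripts/rpc.py -s "$app_socket" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done
    # graceful shutdown, as traced further down: SIGINT, then poll the pid up to 30 times
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done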
00:06:27.668 17:58:27 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.668 17:58:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.668 [2024-07-15 17:58:27.993143] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:27.668 [2024-07-15 17:58:27.993203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476079 ] 00:06:27.668 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.927 [2024-07-15 17:58:28.290607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.186 [2024-07-15 17:58:28.353782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.445 17:58:28 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.445 17:58:28 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:28.445 00:06:28.445 17:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:28.445 INFO: shutting down applications... 00:06:28.445 17:58:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1476079 ]] 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1476079 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1476079 00:06:28.445 17:58:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1476079 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.014 17:58:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:29.014 SPDK target shutdown done 00:06:29.014 17:58:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:29.014 Success 00:06:29.014 00:06:29.014 real 0m1.461s 00:06:29.014 user 0m1.167s 00:06:29.014 sys 0m0.437s 00:06:29.014 17:58:29 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.014 17:58:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:29.014 ************************************ 00:06:29.014 END TEST json_config_extra_key 00:06:29.014 ************************************ 00:06:29.014 17:58:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.014 17:58:29 -- 
spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.014 17:58:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.014 17:58:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.014 17:58:29 -- common/autotest_common.sh@10 -- # set +x 00:06:29.014 ************************************ 00:06:29.014 START TEST alias_rpc 00:06:29.014 ************************************ 00:06:29.015 17:58:29 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.274 * Looking for test storage... 00:06:29.274 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:29.274 17:58:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.274 17:58:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1476390 00:06:29.274 17:58:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:29.274 17:58:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1476390 00:06:29.274 17:58:29 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1476390 ']' 00:06:29.274 17:58:29 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.274 17:58:29 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.274 17:58:29 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.274 17:58:29 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.274 17:58:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.274 [2024-07-15 17:58:29.535115] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:06:29.274 [2024-07-15 17:58:29.535178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476390 ] 00:06:29.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.274 [2024-07-15 17:58:29.618990] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.533 [2024-07-15 17:58:29.692260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.102 17:58:30 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.102 17:58:30 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:30.103 17:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:30.362 17:58:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1476390 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1476390 ']' 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1476390 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1476390 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1476390' 00:06:30.362 killing process with pid 1476390 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@967 -- # kill 1476390 00:06:30.362 17:58:30 alias_rpc -- common/autotest_common.sh@972 -- # wait 1476390 00:06:30.621 00:06:30.621 real 0m1.508s 00:06:30.621 user 0m1.610s 00:06:30.621 sys 0m0.447s 00:06:30.621 17:58:30 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.621 17:58:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.621 ************************************ 00:06:30.621 END TEST alias_rpc 00:06:30.621 ************************************ 00:06:30.621 17:58:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:30.621 17:58:30 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:30.621 17:58:30 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:30.621 17:58:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.621 17:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.621 17:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:30.621 ************************************ 00:06:30.621 START TEST spdkcli_tcp 00:06:30.621 ************************************ 00:06:30.621 17:58:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:30.880 * Looking for test storage... 
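The killprocess helper traced just above (here for the alias_rpc target, pid 1476390) reduces to a liveness check, a process-name lookup, and a kill plus wait. A rough reconstruction of its shape, not the exact autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1    # is it still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        if [ "$name" = sudo ]; then
            sudo kill "$pid"                      # assumption: privileged targets need sudo
        else
            kill "$pid"
        fi
        wait "$pid" 2>/dev/null || true
    }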
00:06:30.880 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:30.880 17:58:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:30.880 17:58:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1476720 00:06:30.880 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1476720 00:06:30.881 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:30.881 17:58:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1476720 ']' 00:06:30.881 17:58:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.881 17:58:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.881 17:58:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.881 17:58:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.881 17:58:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.881 [2024-07-15 17:58:31.113903] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:06:30.881 [2024-07-15 17:58:31.113961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476720 ] 00:06:30.881 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.881 [2024-07-15 17:58:31.197349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.881 [2024-07-15 17:58:31.271953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.881 [2024-07-15 17:58:31.271956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.819 17:58:31 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.819 17:58:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:31.819 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1476847 00:06:31.819 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:31.819 17:58:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:31.819 [ 00:06:31.819 "bdev_malloc_delete", 00:06:31.819 "bdev_malloc_create", 00:06:31.819 "bdev_null_resize", 00:06:31.819 "bdev_null_delete", 00:06:31.819 "bdev_null_create", 00:06:31.819 "bdev_nvme_cuse_unregister", 00:06:31.819 "bdev_nvme_cuse_register", 00:06:31.819 "bdev_opal_new_user", 00:06:31.819 "bdev_opal_set_lock_state", 00:06:31.819 "bdev_opal_delete", 00:06:31.819 "bdev_opal_get_info", 00:06:31.819 "bdev_opal_create", 00:06:31.819 "bdev_nvme_opal_revert", 00:06:31.819 "bdev_nvme_opal_init", 00:06:31.819 "bdev_nvme_send_cmd", 00:06:31.819 "bdev_nvme_get_path_iostat", 00:06:31.819 "bdev_nvme_get_mdns_discovery_info", 00:06:31.819 "bdev_nvme_stop_mdns_discovery", 00:06:31.819 "bdev_nvme_start_mdns_discovery", 00:06:31.819 "bdev_nvme_set_multipath_policy", 00:06:31.819 "bdev_nvme_set_preferred_path", 00:06:31.819 "bdev_nvme_get_io_paths", 00:06:31.819 "bdev_nvme_remove_error_injection", 00:06:31.819 "bdev_nvme_add_error_injection", 00:06:31.819 "bdev_nvme_get_discovery_info", 00:06:31.819 "bdev_nvme_stop_discovery", 00:06:31.819 "bdev_nvme_start_discovery", 00:06:31.819 "bdev_nvme_get_controller_health_info", 00:06:31.819 "bdev_nvme_disable_controller", 00:06:31.819 "bdev_nvme_enable_controller", 00:06:31.819 "bdev_nvme_reset_controller", 00:06:31.819 "bdev_nvme_get_transport_statistics", 00:06:31.819 "bdev_nvme_apply_firmware", 00:06:31.819 "bdev_nvme_detach_controller", 00:06:31.819 "bdev_nvme_get_controllers", 00:06:31.819 "bdev_nvme_attach_controller", 00:06:31.819 "bdev_nvme_set_hotplug", 00:06:31.819 "bdev_nvme_set_options", 00:06:31.819 "bdev_passthru_delete", 00:06:31.819 "bdev_passthru_create", 00:06:31.819 "bdev_lvol_set_parent_bdev", 00:06:31.819 "bdev_lvol_set_parent", 00:06:31.819 "bdev_lvol_check_shallow_copy", 00:06:31.819 "bdev_lvol_start_shallow_copy", 00:06:31.819 "bdev_lvol_grow_lvstore", 00:06:31.819 "bdev_lvol_get_lvols", 00:06:31.819 "bdev_lvol_get_lvstores", 00:06:31.819 "bdev_lvol_delete", 00:06:31.819 "bdev_lvol_set_read_only", 00:06:31.819 "bdev_lvol_resize", 00:06:31.819 "bdev_lvol_decouple_parent", 00:06:31.819 "bdev_lvol_inflate", 00:06:31.819 "bdev_lvol_rename", 00:06:31.819 "bdev_lvol_clone_bdev", 00:06:31.819 "bdev_lvol_clone", 00:06:31.819 "bdev_lvol_snapshot", 00:06:31.819 "bdev_lvol_create", 00:06:31.819 "bdev_lvol_delete_lvstore", 00:06:31.819 
"bdev_lvol_rename_lvstore", 00:06:31.819 "bdev_lvol_create_lvstore", 00:06:31.819 "bdev_raid_set_options", 00:06:31.819 "bdev_raid_remove_base_bdev", 00:06:31.819 "bdev_raid_add_base_bdev", 00:06:31.819 "bdev_raid_delete", 00:06:31.819 "bdev_raid_create", 00:06:31.819 "bdev_raid_get_bdevs", 00:06:31.819 "bdev_error_inject_error", 00:06:31.819 "bdev_error_delete", 00:06:31.819 "bdev_error_create", 00:06:31.819 "bdev_split_delete", 00:06:31.819 "bdev_split_create", 00:06:31.819 "bdev_delay_delete", 00:06:31.819 "bdev_delay_create", 00:06:31.819 "bdev_delay_update_latency", 00:06:31.819 "bdev_zone_block_delete", 00:06:31.819 "bdev_zone_block_create", 00:06:31.819 "blobfs_create", 00:06:31.819 "blobfs_detect", 00:06:31.819 "blobfs_set_cache_size", 00:06:31.819 "bdev_aio_delete", 00:06:31.819 "bdev_aio_rescan", 00:06:31.819 "bdev_aio_create", 00:06:31.819 "bdev_ftl_set_property", 00:06:31.819 "bdev_ftl_get_properties", 00:06:31.819 "bdev_ftl_get_stats", 00:06:31.819 "bdev_ftl_unmap", 00:06:31.819 "bdev_ftl_unload", 00:06:31.819 "bdev_ftl_delete", 00:06:31.819 "bdev_ftl_load", 00:06:31.819 "bdev_ftl_create", 00:06:31.819 "bdev_virtio_attach_controller", 00:06:31.819 "bdev_virtio_scsi_get_devices", 00:06:31.819 "bdev_virtio_detach_controller", 00:06:31.819 "bdev_virtio_blk_set_hotplug", 00:06:31.819 "bdev_iscsi_delete", 00:06:31.820 "bdev_iscsi_create", 00:06:31.820 "bdev_iscsi_set_options", 00:06:31.820 "accel_error_inject_error", 00:06:31.820 "ioat_scan_accel_module", 00:06:31.820 "dsa_scan_accel_module", 00:06:31.820 "iaa_scan_accel_module", 00:06:31.820 "keyring_file_remove_key", 00:06:31.820 "keyring_file_add_key", 00:06:31.820 "keyring_linux_set_options", 00:06:31.820 "iscsi_get_histogram", 00:06:31.820 "iscsi_enable_histogram", 00:06:31.820 "iscsi_set_options", 00:06:31.820 "iscsi_get_auth_groups", 00:06:31.820 "iscsi_auth_group_remove_secret", 00:06:31.820 "iscsi_auth_group_add_secret", 00:06:31.820 "iscsi_delete_auth_group", 00:06:31.820 "iscsi_create_auth_group", 00:06:31.820 "iscsi_set_discovery_auth", 00:06:31.820 "iscsi_get_options", 00:06:31.820 "iscsi_target_node_request_logout", 00:06:31.820 "iscsi_target_node_set_redirect", 00:06:31.820 "iscsi_target_node_set_auth", 00:06:31.820 "iscsi_target_node_add_lun", 00:06:31.820 "iscsi_get_stats", 00:06:31.820 "iscsi_get_connections", 00:06:31.820 "iscsi_portal_group_set_auth", 00:06:31.820 "iscsi_start_portal_group", 00:06:31.820 "iscsi_delete_portal_group", 00:06:31.820 "iscsi_create_portal_group", 00:06:31.820 "iscsi_get_portal_groups", 00:06:31.820 "iscsi_delete_target_node", 00:06:31.820 "iscsi_target_node_remove_pg_ig_maps", 00:06:31.820 "iscsi_target_node_add_pg_ig_maps", 00:06:31.820 "iscsi_create_target_node", 00:06:31.820 "iscsi_get_target_nodes", 00:06:31.820 "iscsi_delete_initiator_group", 00:06:31.820 "iscsi_initiator_group_remove_initiators", 00:06:31.820 "iscsi_initiator_group_add_initiators", 00:06:31.820 "iscsi_create_initiator_group", 00:06:31.820 "iscsi_get_initiator_groups", 00:06:31.820 "nvmf_set_crdt", 00:06:31.820 "nvmf_set_config", 00:06:31.820 "nvmf_set_max_subsystems", 00:06:31.820 "nvmf_stop_mdns_prr", 00:06:31.820 "nvmf_publish_mdns_prr", 00:06:31.820 "nvmf_subsystem_get_listeners", 00:06:31.820 "nvmf_subsystem_get_qpairs", 00:06:31.820 "nvmf_subsystem_get_controllers", 00:06:31.820 "nvmf_get_stats", 00:06:31.820 "nvmf_get_transports", 00:06:31.820 "nvmf_create_transport", 00:06:31.820 "nvmf_get_targets", 00:06:31.820 "nvmf_delete_target", 00:06:31.820 "nvmf_create_target", 00:06:31.820 
"nvmf_subsystem_allow_any_host", 00:06:31.820 "nvmf_subsystem_remove_host", 00:06:31.820 "nvmf_subsystem_add_host", 00:06:31.820 "nvmf_ns_remove_host", 00:06:31.820 "nvmf_ns_add_host", 00:06:31.820 "nvmf_subsystem_remove_ns", 00:06:31.820 "nvmf_subsystem_add_ns", 00:06:31.820 "nvmf_subsystem_listener_set_ana_state", 00:06:31.820 "nvmf_discovery_get_referrals", 00:06:31.820 "nvmf_discovery_remove_referral", 00:06:31.820 "nvmf_discovery_add_referral", 00:06:31.820 "nvmf_subsystem_remove_listener", 00:06:31.820 "nvmf_subsystem_add_listener", 00:06:31.820 "nvmf_delete_subsystem", 00:06:31.820 "nvmf_create_subsystem", 00:06:31.820 "nvmf_get_subsystems", 00:06:31.820 "env_dpdk_get_mem_stats", 00:06:31.820 "nbd_get_disks", 00:06:31.820 "nbd_stop_disk", 00:06:31.820 "nbd_start_disk", 00:06:31.820 "ublk_recover_disk", 00:06:31.820 "ublk_get_disks", 00:06:31.820 "ublk_stop_disk", 00:06:31.820 "ublk_start_disk", 00:06:31.820 "ublk_destroy_target", 00:06:31.820 "ublk_create_target", 00:06:31.820 "virtio_blk_create_transport", 00:06:31.820 "virtio_blk_get_transports", 00:06:31.820 "vhost_controller_set_coalescing", 00:06:31.820 "vhost_get_controllers", 00:06:31.820 "vhost_delete_controller", 00:06:31.820 "vhost_create_blk_controller", 00:06:31.820 "vhost_scsi_controller_remove_target", 00:06:31.820 "vhost_scsi_controller_add_target", 00:06:31.820 "vhost_start_scsi_controller", 00:06:31.820 "vhost_create_scsi_controller", 00:06:31.820 "thread_set_cpumask", 00:06:31.820 "framework_get_governor", 00:06:31.820 "framework_get_scheduler", 00:06:31.820 "framework_set_scheduler", 00:06:31.820 "framework_get_reactors", 00:06:31.820 "thread_get_io_channels", 00:06:31.820 "thread_get_pollers", 00:06:31.820 "thread_get_stats", 00:06:31.820 "framework_monitor_context_switch", 00:06:31.820 "spdk_kill_instance", 00:06:31.820 "log_enable_timestamps", 00:06:31.820 "log_get_flags", 00:06:31.820 "log_clear_flag", 00:06:31.820 "log_set_flag", 00:06:31.820 "log_get_level", 00:06:31.820 "log_set_level", 00:06:31.820 "log_get_print_level", 00:06:31.820 "log_set_print_level", 00:06:31.820 "framework_enable_cpumask_locks", 00:06:31.820 "framework_disable_cpumask_locks", 00:06:31.820 "framework_wait_init", 00:06:31.820 "framework_start_init", 00:06:31.820 "scsi_get_devices", 00:06:31.820 "bdev_get_histogram", 00:06:31.820 "bdev_enable_histogram", 00:06:31.820 "bdev_set_qos_limit", 00:06:31.820 "bdev_set_qd_sampling_period", 00:06:31.820 "bdev_get_bdevs", 00:06:31.820 "bdev_reset_iostat", 00:06:31.820 "bdev_get_iostat", 00:06:31.820 "bdev_examine", 00:06:31.820 "bdev_wait_for_examine", 00:06:31.820 "bdev_set_options", 00:06:31.820 "notify_get_notifications", 00:06:31.820 "notify_get_types", 00:06:31.820 "accel_get_stats", 00:06:31.820 "accel_set_options", 00:06:31.820 "accel_set_driver", 00:06:31.820 "accel_crypto_key_destroy", 00:06:31.820 "accel_crypto_keys_get", 00:06:31.820 "accel_crypto_key_create", 00:06:31.820 "accel_assign_opc", 00:06:31.820 "accel_get_module_info", 00:06:31.820 "accel_get_opc_assignments", 00:06:31.820 "vmd_rescan", 00:06:31.820 "vmd_remove_device", 00:06:31.820 "vmd_enable", 00:06:31.820 "sock_get_default_impl", 00:06:31.820 "sock_set_default_impl", 00:06:31.820 "sock_impl_set_options", 00:06:31.820 "sock_impl_get_options", 00:06:31.820 "iobuf_get_stats", 00:06:31.820 "iobuf_set_options", 00:06:31.820 "framework_get_pci_devices", 00:06:31.820 "framework_get_config", 00:06:31.820 "framework_get_subsystems", 00:06:31.820 "trace_get_info", 00:06:31.820 "trace_get_tpoint_group_mask", 00:06:31.820 
"trace_disable_tpoint_group", 00:06:31.820 "trace_enable_tpoint_group", 00:06:31.820 "trace_clear_tpoint_mask", 00:06:31.820 "trace_set_tpoint_mask", 00:06:31.820 "keyring_get_keys", 00:06:31.820 "spdk_get_version", 00:06:31.820 "rpc_get_methods" 00:06:31.820 ] 00:06:31.820 17:58:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:31.820 17:58:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:31.820 17:58:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1476720 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1476720 ']' 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1476720 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1476720 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1476720' 00:06:31.820 killing process with pid 1476720 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1476720 00:06:31.820 17:58:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1476720 00:06:32.389 00:06:32.389 real 0m1.518s 00:06:32.389 user 0m2.762s 00:06:32.389 sys 0m0.498s 00:06:32.389 17:58:32 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.389 17:58:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.389 ************************************ 00:06:32.389 END TEST spdkcli_tcp 00:06:32.389 ************************************ 00:06:32.389 17:58:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:32.389 17:58:32 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.389 17:58:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.389 17:58:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.389 17:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:32.389 ************************************ 00:06:32.390 START TEST dpdk_mem_utility 00:06:32.390 ************************************ 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.390 * Looking for test storage... 
00:06:32.390 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:32.390 17:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:32.390 17:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1477072 00:06:32.390 17:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1477072 00:06:32.390 17:58:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1477072 ']' 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.390 17:58:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.390 [2024-07-15 17:58:32.700903] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:32.390 [2024-07-15 17:58:32.700962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477072 ] 00:06:32.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.390 [2024-07-15 17:58:32.783601] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.649 [2024-07-15 17:58:32.857201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:33.225 17:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.225 17:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.225 { 00:06:33.225 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.225 } 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.225 17:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:33.225 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:33.225 1 heaps totaling size 814.000000 MiB 00:06:33.225 size: 814.000000 MiB heap id: 0 00:06:33.225 end heaps---------- 00:06:33.225 8 mempools totaling size 598.116089 MiB 00:06:33.225 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:33.225 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:33.225 size: 84.521057 MiB name: bdev_io_1477072 00:06:33.225 size: 51.011292 MiB name: evtpool_1477072 00:06:33.225 size: 50.003479 MiB 
name: msgpool_1477072 00:06:33.225 size: 21.763794 MiB name: PDU_Pool 00:06:33.225 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:33.225 size: 0.026123 MiB name: Session_Pool 00:06:33.225 end mempools------- 00:06:33.225 6 memzones totaling size 4.142822 MiB 00:06:33.225 size: 1.000366 MiB name: RG_ring_0_1477072 00:06:33.225 size: 1.000366 MiB name: RG_ring_1_1477072 00:06:33.225 size: 1.000366 MiB name: RG_ring_4_1477072 00:06:33.225 size: 1.000366 MiB name: RG_ring_5_1477072 00:06:33.225 size: 0.125366 MiB name: RG_ring_2_1477072 00:06:33.225 size: 0.015991 MiB name: RG_ring_3_1477072 00:06:33.225 end memzones------- 00:06:33.225 17:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:33.225 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:33.225 list of free elements. size: 12.519348 MiB 00:06:33.225 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:33.225 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:33.225 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:33.225 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:33.225 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:33.225 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:33.225 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:33.225 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:33.225 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:33.225 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:33.225 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:33.225 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:33.225 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:33.225 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:33.225 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:33.225 list of standard malloc elements. 
size: 199.218079 MiB 00:06:33.225 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:33.225 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:33.225 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:33.225 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:33.225 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:33.225 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:33.225 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:33.225 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:33.225 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:33.225 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:33.225 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:33.225 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:33.225 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:33.225 list of memzone associated elements. 
size: 602.262573 MiB 00:06:33.225 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:33.225 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:33.225 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:33.225 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:33.225 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:33.225 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1477072_0 00:06:33.225 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:33.225 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1477072_0 00:06:33.225 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:33.225 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1477072_0 00:06:33.225 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:33.225 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:33.225 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:33.225 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:33.225 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:33.225 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1477072 00:06:33.225 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:33.225 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1477072 00:06:33.225 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:33.225 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1477072 00:06:33.225 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:33.225 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:33.225 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:33.225 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:33.225 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:33.225 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:33.225 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:33.225 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:33.225 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:33.225 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1477072 00:06:33.225 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:33.225 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1477072 00:06:33.225 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:33.225 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1477072 00:06:33.225 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:33.225 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1477072 00:06:33.225 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:33.225 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1477072 00:06:33.225 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:33.225 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:33.225 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:33.225 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:33.225 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:33.225 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:33.225 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:33.225 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1477072 00:06:33.225 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:33.225 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:33.225 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:33.225 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:33.225 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:33.225 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1477072 00:06:33.225 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:33.225 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:33.225 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:33.225 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1477072 00:06:33.225 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:33.225 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1477072 00:06:33.225 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:33.225 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:33.225 17:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:33.225 17:58:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1477072 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1477072 ']' 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1477072 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.225 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1477072 00:06:33.484 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.484 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.484 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1477072' 00:06:33.484 killing process with pid 1477072 00:06:33.484 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1477072 00:06:33.485 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1477072 00:06:33.743 00:06:33.743 real 0m1.411s 00:06:33.743 user 0m1.456s 00:06:33.743 sys 0m0.447s 00:06:33.743 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.743 17:58:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.743 ************************************ 00:06:33.743 END TEST dpdk_mem_utility 00:06:33.743 ************************************ 00:06:33.743 17:58:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.743 17:58:34 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:33.743 17:58:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.743 17:58:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.743 17:58:34 -- common/autotest_common.sh@10 -- # set +x 00:06:33.743 ************************************ 00:06:33.743 START TEST event 00:06:33.743 ************************************ 00:06:33.743 17:58:34 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:34.003 * Looking for test storage... 
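The dpdk_mem_utility run above first has the target dump its DPDK allocator state and then post-processes that dump with scripts/dpdk_mem_info.py, as traced:

    # ask spdk_tgt to write its allocator state (the reply names /tmp/spdk_mem_dump.txt)
    scripts/rpc.py env_dpdk_get_mem_stats
    # heap and mempool summary
    scripts/dpdk_mem_info.py
    # per-element and memzone breakdown for heap 0
    scripts/dpdk_mem_info.py -m 0

(The test itself goes through its rpc_cmd wrapper rather than calling rpc.py directly.)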
00:06:34.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:34.003 17:58:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:34.003 17:58:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.003 17:58:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.003 17:58:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:34.003 17:58:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.003 17:58:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.003 ************************************ 00:06:34.003 START TEST event_perf 00:06:34.003 ************************************ 00:06:34.003 17:58:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.003 Running I/O for 1 seconds...[2024-07-15 17:58:34.221062] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:34.003 [2024-07-15 17:58:34.221138] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477406 ] 00:06:34.003 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.003 [2024-07-15 17:58:34.305716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.003 [2024-07-15 17:58:34.379521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.003 [2024-07-15 17:58:34.379614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.003 [2024-07-15 17:58:34.379700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.003 [2024-07-15 17:58:34.379702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.381 Running I/O for 1 seconds... 00:06:35.381 lcore 0: 219849 00:06:35.381 lcore 1: 219850 00:06:35.381 lcore 2: 219849 00:06:35.381 lcore 3: 219850 00:06:35.381 done. 00:06:35.381 00:06:35.381 real 0m1.245s 00:06:35.381 user 0m4.141s 00:06:35.381 sys 0m0.102s 00:06:35.381 17:58:35 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.381 17:58:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.381 ************************************ 00:06:35.381 END TEST event_perf 00:06:35.381 ************************************ 00:06:35.381 17:58:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.381 17:58:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.381 17:58:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:35.381 17:58:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.381 17:58:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.381 ************************************ 00:06:35.381 START TEST event_reactor 00:06:35.381 ************************************ 00:06:35.381 17:58:35 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:35.381 [2024-07-15 17:58:35.547568] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:06:35.382 [2024-07-15 17:58:35.547649] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477587 ] 00:06:35.382 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.382 [2024-07-15 17:58:35.633624] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.382 [2024-07-15 17:58:35.705907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.759 test_start 00:06:36.759 oneshot 00:06:36.759 tick 100 00:06:36.759 tick 100 00:06:36.759 tick 250 00:06:36.759 tick 100 00:06:36.759 tick 100 00:06:36.759 tick 100 00:06:36.759 tick 250 00:06:36.759 tick 500 00:06:36.759 tick 100 00:06:36.759 tick 100 00:06:36.759 tick 250 00:06:36.759 tick 100 00:06:36.759 tick 100 00:06:36.759 test_end 00:06:36.759 00:06:36.759 real 0m1.245s 00:06:36.759 user 0m1.136s 00:06:36.759 sys 0m0.104s 00:06:36.759 17:58:36 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.759 17:58:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 ************************************ 00:06:36.759 END TEST event_reactor 00:06:36.759 ************************************ 00:06:36.759 17:58:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.759 17:58:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:36.759 17:58:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:36.759 17:58:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.759 17:58:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.759 ************************************ 00:06:36.759 START TEST event_reactor_perf 00:06:36.759 ************************************ 00:06:36.759 17:58:36 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:36.759 [2024-07-15 17:58:36.857498] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:06:36.759 [2024-07-15 17:58:36.857554] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477822 ] 00:06:36.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.759 [2024-07-15 17:58:36.939357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.759 [2024-07-15 17:58:37.009147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.698 test_start 00:06:37.698 test_end 00:06:37.698 Performance: 529229 events per second 00:06:37.698 00:06:37.698 real 0m1.227s 00:06:37.698 user 0m1.128s 00:06:37.698 sys 0m0.094s 00:06:37.698 17:58:38 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.698 17:58:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.698 ************************************ 00:06:37.698 END TEST event_reactor_perf 00:06:37.698 ************************************ 00:06:37.958 17:58:38 event -- common/autotest_common.sh@1142 -- # return 0 00:06:37.958 17:58:38 event -- event/event.sh@49 -- # uname -s 00:06:37.958 17:58:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:37.958 17:58:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:37.958 17:58:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.958 17:58:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.958 17:58:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.958 ************************************ 00:06:37.958 START TEST event_scheduler 00:06:37.958 ************************************ 00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:37.958 * Looking for test storage... 00:06:37.958 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:37.958 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:37.958 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1478134 00:06:37.958 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:37.958 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:37.958 17:58:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1478134 00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1478134 ']' 00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
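The three event binaries exercised above take a reactor core mask and a run time on the command line; as invoked in this job (paths relative to the spdk checkout):

    test/event/event_perf/event_perf -m 0xF -t 1   # four reactors posting events for 1 s
    test/event/reactor/reactor -t 1                # single reactor, oneshot/tick schedule
    test/event/reactor_perf/reactor_perf -t 1      # single reactor event throughput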
00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.958 17:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.958 [2024-07-15 17:58:38.315959] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:37.958 [2024-07-15 17:58:38.316029] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478134 ] 00:06:37.958 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.217 [2024-07-15 17:58:38.397779] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.217 [2024-07-15 17:58:38.470694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.217 [2024-07-15 17:58:38.470780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.217 [2024-07-15 17:58:38.470863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.217 [2024-07-15 17:58:38.470866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:38.786 17:58:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:38.786 [2024-07-15 17:58:39.129279] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:38.786 [2024-07-15 17:58:39.129299] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:38.786 [2024-07-15 17:58:39.129311] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:38.786 [2024-07-15 17:58:39.129319] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:38.786 [2024-07-15 17:58:39.129326] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.786 17:58:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.786 17:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.046 [2024-07-15 17:58:39.200609] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
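The scheduler test app above is started paused with --wait-for-rpc and then configured over RPC before the subtests run; once its socket is up, the traced sequence is roughly equivalent to:

    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scripts/rpc.py framework_set_scheduler dynamic   # falls back, per the NOTICE, when the dpdk governor is unavailable
    scripts/rpc.py framework_start_init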
00:06:39.046 17:58:39 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.046 17:58:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:39.047 17:58:39 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.047 17:58:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 ************************************ 00:06:39.047 START TEST scheduler_create_thread 00:06:39.047 ************************************ 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 2 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 3 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 4 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 5 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 6 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 7 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 8 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 9 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 10 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.047 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.615 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.615 17:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:39.615 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.615 17:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.991 17:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.991 17:58:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:40.991 17:58:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:40.991 17:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.991 17:58:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.957 17:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.958 00:06:41.958 real 0m3.103s 00:06:41.958 user 0m0.021s 00:06:41.958 sys 0m0.009s 00:06:41.958 17:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.958 17:58:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.958 ************************************ 00:06:41.958 END TEST scheduler_create_thread 00:06:41.958 ************************************ 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:42.216 17:58:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:42.216 17:58:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1478134 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1478134 ']' 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1478134 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1478134 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1478134' 00:06:42.216 killing process with pid 1478134 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1478134 00:06:42.216 17:58:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1478134 00:06:42.474 [2024-07-15 17:58:42.723998] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
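The scheduler_create_thread sub-test above drives the scheduler purely through its RPC plugin: pinned fully-active and pinned idle threads on masks 0x1 through 0x8, an unpinned one_third_active thread at 30%, a half_active thread created idle and then raised to 50% with scheduler_thread_set_active, and finally a throwaway thread that is created and deleted again. A condensed sketch of that sequence, assuming rpc.py can load the test's scheduler_plugin (plugin directory on PYTHONPATH) and that the create RPC prints the new thread id as it does in the trace:

  RPC="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"
  for mask in 0x1 0x2 0x4 0x8; do
      $RPC scheduler_thread_create -n active_pinned -m $mask -a 100   # pinned, 100% busy
      $RPC scheduler_thread_create -n idle_pinned  -m $mask -a 0      # pinned, idle
  done
  $RPC scheduler_thread_create -n one_third_active -a 30
  tid=$($RPC scheduler_thread_create -n half_active -a 0)
  $RPC scheduler_thread_set_active $tid 50          # bump the idle thread to 50% active
  tid=$($RPC scheduler_thread_create -n deleted -a 100)
  $RPC scheduler_thread_delete $tid                 # and remove it again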
00:06:42.732 00:06:42.732 real 0m4.770s 00:06:42.732 user 0m9.172s 00:06:42.732 sys 0m0.441s 00:06:42.732 17:58:42 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.732 17:58:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.732 ************************************ 00:06:42.732 END TEST event_scheduler 00:06:42.732 ************************************ 00:06:42.732 17:58:42 event -- common/autotest_common.sh@1142 -- # return 0 00:06:42.732 17:58:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:42.732 17:58:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:42.732 17:58:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.732 17:58:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.732 17:58:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.732 ************************************ 00:06:42.732 START TEST app_repeat 00:06:42.732 ************************************ 00:06:42.732 17:58:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1478986 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1478986' 00:06:42.732 Process app_repeat pid: 1478986 00:06:42.732 17:58:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.733 17:58:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:42.733 spdk_app_start Round 0 00:06:42.733 17:58:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1478986 /var/tmp/spdk-nbd.sock 00:06:42.733 17:58:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1478986 ']' 00:06:42.733 17:58:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.733 17:58:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.733 17:58:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.733 17:58:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.733 17:58:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.733 [2024-07-15 17:58:43.056130] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:06:42.733 [2024-07-15 17:58:43.056191] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1478986 ] 00:06:42.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.998 [2024-07-15 17:58:43.138244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.998 [2024-07-15 17:58:43.214506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.998 [2024-07-15 17:58:43.214510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.566 17:58:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.566 17:58:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:43.566 17:58:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.823 Malloc0 00:06:43.823 17:58:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.082 Malloc1 00:06:44.082 17:58:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.082 /dev/nbd0 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.082 17:58:44 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.082 1+0 records in 00:06:44.082 1+0 records out 00:06:44.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229615 s, 17.8 MB/s 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.082 17:58:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.082 17:58:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.340 /dev/nbd1 00:06:44.340 17:58:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.340 17:58:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.340 17:58:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.341 1+0 records in 00:06:44.341 1+0 records out 00:06:44.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231211 s, 17.7 MB/s 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.341 17:58:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:44.341 17:58:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.341 17:58:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:44.341 17:58:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.341 17:58:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.341 17:58:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.600 { 00:06:44.600 "nbd_device": "/dev/nbd0", 00:06:44.600 "bdev_name": "Malloc0" 00:06:44.600 }, 00:06:44.600 { 00:06:44.600 "nbd_device": "/dev/nbd1", 00:06:44.600 "bdev_name": "Malloc1" 00:06:44.600 } 00:06:44.600 ]' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.600 { 00:06:44.600 "nbd_device": "/dev/nbd0", 00:06:44.600 "bdev_name": "Malloc0" 00:06:44.600 }, 00:06:44.600 { 00:06:44.600 "nbd_device": "/dev/nbd1", 00:06:44.600 "bdev_name": "Malloc1" 00:06:44.600 } 00:06:44.600 ]' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:44.600 /dev/nbd1' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:44.600 /dev/nbd1' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:44.600 256+0 records in 00:06:44.600 256+0 records out 00:06:44.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109447 s, 95.8 MB/s 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:44.600 256+0 records in 00:06:44.600 256+0 records out 00:06:44.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191084 s, 54.9 MB/s 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:44.600 256+0 records in 00:06:44.600 256+0 records out 00:06:44.600 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207436 s, 50.5 MB/s 00:06:44.600 17:58:44 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:44.600 17:58:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.859 17:58:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.118 
17:58:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.118 17:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.378 17:58:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.378 17:58:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:45.643 17:58:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:45.643 [2024-07-15 17:58:46.005599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.902 [2024-07-15 17:58:46.069442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.902 [2024-07-15 17:58:46.069446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.902 [2024-07-15 17:58:46.109889] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:45.902 [2024-07-15 17:58:46.109933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:48.437 17:58:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:48.437 17:58:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:48.437 spdk_app_start Round 1 00:06:48.437 17:58:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1478986 /var/tmp/spdk-nbd.sock 00:06:48.437 17:58:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1478986 ']' 00:06:48.437 17:58:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:48.437 17:58:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.437 17:58:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:48.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
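Each app_repeat round above is the same nbd round-trip: two 64 MB malloc bdevs are created over /var/tmp/spdk-nbd.sock, exported as /dev/nbd0 and /dev/nbd1, a 1 MB random pattern is written through the block devices with dd, read back and compared with cmp, and the disks are stopped again. A condensed sketch of one verify pass (the trace writes to both devices before verifying both; the temp-file path here is illustrative, not the one from the run):

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                    # -> Malloc0
  $RPC bdev_malloc_create 64 4096                    # -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write the pattern
      cmp -b -n 1M /tmp/nbdrandtest $nbd                              # read back and compare
      $RPC nbd_stop_disk $nbd
  done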
00:06:48.437 17:58:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.437 17:58:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:48.696 17:58:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.696 17:58:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.696 17:58:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.954 Malloc0 00:06:48.954 17:58:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.214 Malloc1 00:06:49.214 17:58:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.214 /dev/nbd0 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:49.214 1+0 records in 00:06:49.214 1+0 records out 00:06:49.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228329 s, 17.9 MB/s 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.214 17:58:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.214 17:58:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.473 /dev/nbd1 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.473 1+0 records in 00:06:49.473 1+0 records out 00:06:49.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238565 s, 17.2 MB/s 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:49.473 17:58:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.473 17:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.732 17:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.732 { 00:06:49.732 
"nbd_device": "/dev/nbd0", 00:06:49.732 "bdev_name": "Malloc0" 00:06:49.732 }, 00:06:49.732 { 00:06:49.732 "nbd_device": "/dev/nbd1", 00:06:49.732 "bdev_name": "Malloc1" 00:06:49.732 } 00:06:49.732 ]' 00:06:49.732 17:58:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.732 { 00:06:49.732 "nbd_device": "/dev/nbd0", 00:06:49.732 "bdev_name": "Malloc0" 00:06:49.732 }, 00:06:49.732 { 00:06:49.732 "nbd_device": "/dev/nbd1", 00:06:49.732 "bdev_name": "Malloc1" 00:06:49.732 } 00:06:49.732 ]' 00:06:49.732 17:58:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.732 /dev/nbd1' 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.732 /dev/nbd1' 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.732 256+0 records in 00:06:49.732 256+0 records out 00:06:49.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114895 s, 91.3 MB/s 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.732 256+0 records in 00:06:49.732 256+0 records out 00:06:49.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173453 s, 60.5 MB/s 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.732 256+0 records in 00:06:49.732 256+0 records out 00:06:49.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206287 s, 50.8 MB/s 00:06:49.732 17:58:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.733 17:58:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.991 17:58:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.250 17:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.509 17:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.509 17:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.510 17:58:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.510 17:58:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.769 17:58:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.769 [2024-07-15 17:58:51.107520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.028 [2024-07-15 17:58:51.172488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.028 [2024-07-15 17:58:51.172491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.028 [2024-07-15 17:58:51.214053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.028 [2024-07-15 17:58:51.214095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.564 17:58:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.564 17:58:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:53.564 spdk_app_start Round 2 00:06:53.564 17:58:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1478986 /var/tmp/spdk-nbd.sock 00:06:53.564 17:58:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1478986 ']' 00:06:53.564 17:58:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.564 17:58:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.564 17:58:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
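app_repeat itself repeats that pass: event.sh loops Round 0 through Round 2, and after each verify it sends the running app_repeat instance a SIGTERM via spdk_kill_instance, sleeps, and waits for the app to come back up on the same socket for the next round. The outer loop, roughly as the trace shows it (a sketch; the app-side restart is handled by the app_repeat binary, not the shell):

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      # wait for the app_repeat instance to (re)listen on the socket, then run a verify pass
      # ... malloc/nbd verify pass as sketched above ...
      $RPC spdk_kill_instance SIGTERM   # app_repeat restarts its SPDK app for the next round
      sleep 3
  done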
00:06:53.564 17:58:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.564 17:58:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.823 17:58:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.823 17:58:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:53.823 17:58:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.082 Malloc0 00:06:54.082 17:58:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:54.082 Malloc1 00:06:54.082 17:58:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.082 17:58:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:54.342 /dev/nbd0 00:06:54.342 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:54.342 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:54.342 1+0 records in 00:06:54.342 1+0 records out 00:06:54.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242705 s, 16.9 MB/s 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.342 17:58:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.342 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.342 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.342 17:58:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.602 /dev/nbd1 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.602 1+0 records in 00:06:54.602 1+0 records out 00:06:54.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000134135 s, 30.5 MB/s 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:54.602 17:58:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.602 17:58:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.861 17:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.861 { 00:06:54.861 
"nbd_device": "/dev/nbd0", 00:06:54.861 "bdev_name": "Malloc0" 00:06:54.861 }, 00:06:54.861 { 00:06:54.862 "nbd_device": "/dev/nbd1", 00:06:54.862 "bdev_name": "Malloc1" 00:06:54.862 } 00:06:54.862 ]' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.862 { 00:06:54.862 "nbd_device": "/dev/nbd0", 00:06:54.862 "bdev_name": "Malloc0" 00:06:54.862 }, 00:06:54.862 { 00:06:54.862 "nbd_device": "/dev/nbd1", 00:06:54.862 "bdev_name": "Malloc1" 00:06:54.862 } 00:06:54.862 ]' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.862 /dev/nbd1' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.862 /dev/nbd1' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.862 256+0 records in 00:06:54.862 256+0 records out 00:06:54.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107095 s, 97.9 MB/s 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.862 256+0 records in 00:06:54.862 256+0 records out 00:06:54.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175284 s, 59.8 MB/s 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.862 256+0 records in 00:06:54.862 256+0 records out 00:06:54.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199583 s, 52.5 MB/s 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.862 17:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:55.122 17:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.381 17:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.381 17:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.381 17:58:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.381 17:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.381 17:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.382 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.641 17:58:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.641 17:58:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.641 17:58:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.900 [2024-07-15 17:58:56.174709] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.900 [2024-07-15 17:58:56.238283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.900 [2024-07-15 17:58:56.238288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.900 [2024-07-15 17:58:56.278773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.900 [2024-07-15 17:58:56.278818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:59.250 17:58:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1478986 /var/tmp/spdk-nbd.sock 00:06:59.250 17:58:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1478986 ']' 00:06:59.250 17:58:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.250 17:58:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.250 17:58:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
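The app_repeat round traced above reduces to a short bash sequence: export both Malloc bdevs over NBD through the /var/tmp/spdk-nbd.sock RPC socket, write a 1 MiB random pattern to each device with O_DIRECT, verify it with cmp, tear the devices down, confirm nbd_get_disks reports nothing, then kill the app instance. A hedged condensation follows; the temp-file path is shortened for readability and the retry loops of nbd_common.sh are left out.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/tmp/nbdrandtest                                  # stand-in for the test/event path
  $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=$tmp bs=4096 count=256          # 1 MiB random pattern
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write it to the NBD device
    cmp -b -n 1M $tmp $nbd                              # read-back verification
  done
  $rpc -s $sock nbd_stop_disk /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd1
  count=$($rpc -s $sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [ "$count" -eq 0 ]                                    # no NBD devices may remain
  $rpc -s $sock spdk_kill_instance SIGTERM              # end this repeat round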
00:06:59.250 17:58:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.250 17:58:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:59.250 17:58:59 event.app_repeat -- event/event.sh@39 -- # killprocess 1478986 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1478986 ']' 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1478986 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1478986 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1478986' 00:06:59.250 killing process with pid 1478986 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1478986 00:06:59.250 17:58:59 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1478986 00:06:59.251 spdk_app_start is called in Round 0. 00:06:59.251 Shutdown signal received, stop current app iteration 00:06:59.251 Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 reinitialization... 00:06:59.251 spdk_app_start is called in Round 1. 00:06:59.251 Shutdown signal received, stop current app iteration 00:06:59.251 Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 reinitialization... 00:06:59.251 spdk_app_start is called in Round 2. 00:06:59.251 Shutdown signal received, stop current app iteration 00:06:59.251 Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 reinitialization... 00:06:59.251 spdk_app_start is called in Round 3. 
00:06:59.251 Shutdown signal received, stop current app iteration 00:06:59.251 17:58:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:59.251 17:58:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:59.251 00:06:59.251 real 0m16.358s 00:06:59.251 user 0m34.750s 00:06:59.251 sys 0m3.089s 00:06:59.251 17:58:59 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.251 17:58:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.251 ************************************ 00:06:59.251 END TEST app_repeat 00:06:59.251 ************************************ 00:06:59.251 17:58:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:59.251 17:58:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:59.251 17:58:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:59.251 17:58:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.251 17:58:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.251 17:58:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.251 ************************************ 00:06:59.251 START TEST cpu_locks 00:06:59.251 ************************************ 00:06:59.251 17:58:59 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:59.251 * Looking for test storage... 00:06:59.251 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:59.251 17:58:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:59.251 17:58:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:59.251 17:58:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:59.251 17:58:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:59.251 17:58:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.251 17:58:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.251 17:58:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.251 ************************************ 00:06:59.251 START TEST default_locks 00:06:59.251 ************************************ 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1482136 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1482136 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1482136 ']' 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
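The default_locks case that starts here exercises the simplest scenario: a single spdk_tgt pinned to core 0 must hold an spdk_cpu_lock file while it runs, and waiting on it after it has been killed must fail. Roughly, with the binary path from the trace and the killprocess/waitforlisten internals simplified away:

  spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                          # claims core 0 and its lock file
  pid=$!
  # once the target listens on /var/tmp/spdk.sock, the lock must be visible
  lslocks -p $pid | grep -q spdk_cpu_lock
  kill $pid; wait $pid
  # a later waitforlisten on the dead pid is expected to fail (wrapped in NOT below)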
00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.251 17:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.509 [2024-07-15 17:58:59.663038] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:06:59.509 [2024-07-15 17:58:59.663083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482136 ] 00:06:59.509 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.509 [2024-07-15 17:58:59.744342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.509 [2024-07-15 17:58:59.817410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.076 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.076 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:00.076 17:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1482136 00:07:00.076 17:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1482136 00:07:00.076 17:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.644 lslocks: write error 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1482136 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1482136 ']' 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1482136 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482136 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482136' 00:07:00.644 killing process with pid 1482136 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1482136 00:07:00.644 17:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1482136 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1482136 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1482136 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1482136 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1482136 ']' 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.904 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1482136) - No such process 00:07:00.904 ERROR: process (pid: 1482136) is no longer running 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:00.904 00:07:00.904 real 0m1.566s 00:07:00.904 user 0m1.639s 00:07:00.904 sys 0m0.541s 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.904 17:59:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.904 ************************************ 00:07:00.904 END TEST default_locks 00:07:00.904 ************************************ 00:07:00.904 17:59:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:00.904 17:59:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:00.904 17:59:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.904 17:59:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.904 17:59:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.904 ************************************ 00:07:00.904 START TEST default_locks_via_rpc 00:07:00.904 ************************************ 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1482440 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1482440 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1482440 ']' 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.904 17:59:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.164 [2024-07-15 17:59:01.306830] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:01.164 [2024-07-15 17:59:01.306876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482440 ] 00:07:01.164 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.164 [2024-07-15 17:59:01.388145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.164 [2024-07-15 17:59:01.461834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1482440 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1482440 00:07:01.733 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
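The default_locks_via_rpc trace above boils down to toggling the core locks of a running target over RPC instead of at startup: disable them, check that no lock file is held, re-enable them, and check that the lock is back. A sketch, with the no_locks bookkeeping reduced to a comment:

  spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $spdk_tgt -m 0x1 &                          # started with cpumask locks enabled
  pid=$!
  $rpc framework_disable_cpumask_locks        # release the core-0 lock at runtime
  # ... no /var/tmp/spdk_cpu_lock_* should be held by $pid here ...
  $rpc framework_enable_cpumask_locks         # re-acquire it
  lslocks -p $pid | grep -q spdk_cpu_lock     # and prove it is held again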
00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1482440 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1482440 ']' 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1482440 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482440 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482440' 00:07:02.302 killing process with pid 1482440 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1482440 00:07:02.302 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1482440 00:07:02.562 00:07:02.562 real 0m1.578s 00:07:02.562 user 0m1.644s 00:07:02.562 sys 0m0.554s 00:07:02.562 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.562 17:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.562 ************************************ 00:07:02.562 END TEST default_locks_via_rpc 00:07:02.562 ************************************ 00:07:02.562 17:59:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:02.562 17:59:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:02.562 17:59:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.562 17:59:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.562 17:59:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.562 ************************************ 00:07:02.562 START TEST non_locking_app_on_locked_coremask 00:07:02.562 ************************************ 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1482741 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1482741 /var/tmp/spdk.sock 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1482741 ']' 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.562 17:59:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.821 [2024-07-15 17:59:02.965880] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:02.821 [2024-07-15 17:59:02.965925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482741 ] 00:07:02.821 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.821 [2024-07-15 17:59:03.048452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.821 [2024-07-15 17:59:03.122971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.394 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.394 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:03.394 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1482966 00:07:03.394 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1482966 /var/tmp/spdk2.sock 00:07:03.394 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:03.395 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1482966 ']' 00:07:03.395 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.395 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.395 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.395 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.395 17:59:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.653 [2024-07-15 17:59:03.808798] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:03.653 [2024-07-15 17:59:03.808854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482966 ] 00:07:03.653 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.653 [2024-07-15 17:59:03.925911] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
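This is the heart of non_locking_app_on_locked_coremask: core 0 is already locked by the first target, and the second target on the same core only comes up because it was started with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice just above. In outline, with the binary path taken from the trace:

  spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                                             # holds the core-0 lock
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # both instances share core 0; only the first should show an spdk_cpu_lock in lslocks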
00:07:03.653 [2024-07-15 17:59:03.925939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.912 [2024-07-15 17:59:04.069623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.481 17:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.481 17:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:04.481 17:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1482741 00:07:04.481 17:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1482741 00:07:04.481 17:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.861 lslocks: write error 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1482741 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1482741 ']' 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1482741 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482741 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482741' 00:07:05.861 killing process with pid 1482741 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1482741 00:07:05.861 17:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1482741 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1482966 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1482966 ']' 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1482966 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1482966 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1482966' 00:07:06.430 
killing process with pid 1482966 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1482966 00:07:06.430 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1482966 00:07:06.689 00:07:06.689 real 0m4.053s 00:07:06.689 user 0m4.286s 00:07:06.689 sys 0m1.402s 00:07:06.689 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.689 17:59:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.689 ************************************ 00:07:06.689 END TEST non_locking_app_on_locked_coremask 00:07:06.689 ************************************ 00:07:06.689 17:59:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.689 17:59:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:06.689 17:59:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.689 17:59:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.689 17:59:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.689 ************************************ 00:07:06.689 START TEST locking_app_on_unlocked_coremask 00:07:06.689 ************************************ 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1483569 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1483569 /var/tmp/spdk.sock 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1483569 ']' 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.689 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.948 [2024-07-15 17:59:07.103149] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:06.948 [2024-07-15 17:59:07.103198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483569 ] 00:07:06.948 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.948 [2024-07-15 17:59:07.185976] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:06.948 [2024-07-15 17:59:07.186001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.948 [2024-07-15 17:59:07.249635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1483589 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1483589 /var/tmp/spdk2.sock 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1483589 ']' 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.517 17:59:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:07.777 [2024-07-15 17:59:07.939336] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
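locking_app_on_unlocked_coremask, being set up above, is the mirror case: the first target opts out of locking, leaving core 0 free, so a second, normally locked target may claim it. A rough sketch:

  spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 --disable-cpumask-locks &      # first target leaves core 0 unlocked
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # second target takes the core-0 lock
  pid2=$!
  lslocks -p $pid2 | grep -q spdk_cpu_lock        # the lock now belongs to the second pid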
00:07:07.777 [2024-07-15 17:59:07.939391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483589 ] 00:07:07.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.777 [2024-07-15 17:59:08.056946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.037 [2024-07-15 17:59:08.201617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.606 17:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.606 17:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:08.606 17:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1483589 00:07:08.606 17:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1483589 00:07:08.606 17:59:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.175 lslocks: write error 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1483569 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1483569 ']' 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1483569 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1483569 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1483569' 00:07:09.175 killing process with pid 1483569 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1483569 00:07:09.175 17:59:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1483569 00:07:10.112 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1483589 00:07:10.112 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1483589 ']' 00:07:10.112 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1483589 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1483589 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1483589' 00:07:10.113 killing process with pid 1483589 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1483589 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1483589 00:07:10.113 00:07:10.113 real 0m3.456s 00:07:10.113 user 0m3.640s 00:07:10.113 sys 0m1.145s 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.113 17:59:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.113 ************************************ 00:07:10.113 END TEST locking_app_on_unlocked_coremask 00:07:10.113 ************************************ 00:07:10.372 17:59:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:10.372 17:59:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:10.372 17:59:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.372 17:59:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.372 17:59:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.372 ************************************ 00:07:10.372 START TEST locking_app_on_locked_coremask 00:07:10.372 ************************************ 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1484146 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1484146 /var/tmp/spdk.sock 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1484146 ']' 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.372 17:59:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.372 [2024-07-15 17:59:10.636631] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:10.372 [2024-07-15 17:59:10.636676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484146 ] 00:07:10.372 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.372 [2024-07-15 17:59:10.717935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.631 [2024-07-15 17:59:10.790910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1484335 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1484335 /var/tmp/spdk2.sock 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1484335 /var/tmp/spdk2.sock 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1484335 /var/tmp/spdk2.sock 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1484335 ']' 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.199 17:59:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.199 [2024-07-15 17:59:11.483969] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
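locking_app_on_locked_coremask, in progress here, is the negative counterpart: both instances want core 0 with locking enabled, so the second must refuse to start, which is exactly the "Cannot create lock on core 0 ... Unable to acquire lock on assigned core mask" failure traced just below. Roughly, with the second launch simplified to a foreground run:

  spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &                                   # first target claims core 0
  if $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then     # second target must NOT come up
    echo "unexpected: second target started on a locked core" >&2; exit 1
  fi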
00:07:11.199 [2024-07-15 17:59:11.484029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484335 ] 00:07:11.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.459 [2024-07-15 17:59:11.601642] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1484146 has claimed it. 00:07:11.459 [2024-07-15 17:59:11.601682] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:11.756 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1484335) - No such process 00:07:11.757 ERROR: process (pid: 1484335) is no longer running 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1484146 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1484146 00:07:11.757 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.326 lslocks: write error 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1484146 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1484146 ']' 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1484146 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1484146 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1484146' 00:07:12.326 killing process with pid 1484146 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1484146 00:07:12.326 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1484146 00:07:12.585 00:07:12.585 real 0m2.229s 00:07:12.585 user 0m2.426s 00:07:12.585 sys 0m0.657s 00:07:12.585 17:59:12 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.585 17:59:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.585 ************************************ 00:07:12.585 END TEST locking_app_on_locked_coremask 00:07:12.585 ************************************ 00:07:12.585 17:59:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:12.585 17:59:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:12.585 17:59:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.585 17:59:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.585 17:59:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.585 ************************************ 00:07:12.585 START TEST locking_overlapped_coremask 00:07:12.585 ************************************ 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1484589 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1484589 /var/tmp/spdk.sock 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1484589 ']' 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.585 17:59:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.585 [2024-07-15 17:59:12.950561] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:12.585 [2024-07-15 17:59:12.950605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484589 ] 00:07:12.845 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.845 [2024-07-15 17:59:13.033409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.845 [2024-07-15 17:59:13.109843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.845 [2024-07-15 17:59:13.109940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.845 [2024-07-15 17:59:13.109942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1484722 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1484722 /var/tmp/spdk2.sock 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1484722 /var/tmp/spdk2.sock 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1484722 /var/tmp/spdk2.sock 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1484722 ']' 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.412 17:59:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.412 [2024-07-15 17:59:13.807507] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:13.412 [2024-07-15 17:59:13.807559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484722 ] 00:07:13.670 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.670 [2024-07-15 17:59:13.922938] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1484589 has claimed it. 00:07:13.670 [2024-07-15 17:59:13.922975] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:14.239 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1484722) - No such process 00:07:14.239 ERROR: process (pid: 1484722) is no longer running 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1484589 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1484589 ']' 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1484589 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1484589 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1484589' 00:07:14.239 killing process with pid 1484589 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1484589 00:07:14.239 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1484589 00:07:14.498 00:07:14.498 real 0m1.894s 00:07:14.498 user 0m5.248s 00:07:14.498 sys 0m0.484s 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.498 ************************************ 00:07:14.498 END TEST locking_overlapped_coremask 00:07:14.498 ************************************ 00:07:14.498 17:59:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.498 17:59:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:14.498 17:59:14 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.498 17:59:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.498 17:59:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.498 ************************************ 00:07:14.498 START TEST locking_overlapped_coremask_via_rpc 00:07:14.498 ************************************ 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1485012 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1485012 /var/tmp/spdk.sock 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1485012 ']' 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.498 17:59:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.758 [2024-07-15 17:59:14.923678] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:14.758 [2024-07-15 17:59:14.923722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485012 ] 00:07:14.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.758 [2024-07-15 17:59:15.005276] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:14.758 [2024-07-15 17:59:15.005298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.758 [2024-07-15 17:59:15.080400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.758 [2024-07-15 17:59:15.080496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.758 [2024-07-15 17:59:15.080497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1485041 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1485041 /var/tmp/spdk2.sock 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1485041 ']' 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.328 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.587 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.587 17:59:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.587 [2024-07-15 17:59:15.778023] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:15.587 [2024-07-15 17:59:15.778075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485041 ] 00:07:15.587 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.587 [2024-07-15 17:59:15.897947] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.587 [2024-07-15 17:59:15.897977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.845 [2024-07-15 17:59:16.048214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.845 [2024-07-15 17:59:16.048332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.845 [2024-07-15 17:59:16.048333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.413 [2024-07-15 17:59:16.600081] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1485012 has claimed it. 
00:07:16.413 request: 00:07:16.413 { 00:07:16.413 "method": "framework_enable_cpumask_locks", 00:07:16.413 "req_id": 1 00:07:16.413 } 00:07:16.413 Got JSON-RPC error response 00:07:16.413 response: 00:07:16.413 { 00:07:16.413 "code": -32603, 00:07:16.413 "message": "Failed to claim CPU core: 2" 00:07:16.413 } 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1485012 /var/tmp/spdk.sock 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1485012 ']' 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1485041 /var/tmp/spdk2.sock 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1485041 ']' 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
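The failure above comes from overlapping core masks: -m 0x7 pins the first target to cores 0-2 and -m 0x1c pins the second to cores 2-4, so both want core 2. Because both targets were started with --disable-cpumask-locks they come up cleanly, and the conflict only surfaces when framework_enable_cpumask_locks is sent to the second target, which returns the -32603 "Failed to claim CPU core: 2" error captured in the JSON-RPC response above. A minimal manual reproduction might look like the sketch below; it assumes a built SPDK tree with build/bin/spdk_tgt and scripts/rpc.py available, and uses a plain sleep in place of the waitforlisten helper the test relies on:

  # two targets whose core masks overlap on core 2, core locks disabled at startup
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  sleep 2   # crude stand-in for waitforlisten polling the RPC sockets

  # first target claims cores 0-2; expected to succeed
  ./scripts/rpc.py framework_enable_cpumask_locks

  # second target then tries to claim core 2 as well; expected to fail with
  # code -32603, "Failed to claim CPU core: 2", as in the log above
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks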
00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.413 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.672 00:07:16.672 real 0m2.113s 00:07:16.672 user 0m0.826s 00:07:16.672 sys 0m0.209s 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.672 17:59:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.672 ************************************ 00:07:16.672 END TEST locking_overlapped_coremask_via_rpc 00:07:16.672 ************************************ 00:07:16.672 17:59:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:16.672 17:59:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.672 17:59:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1485012 ]] 00:07:16.672 17:59:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1485012 00:07:16.672 17:59:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1485012 ']' 00:07:16.672 17:59:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1485012 00:07:16.672 17:59:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:16.672 17:59:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.672 17:59:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1485012 00:07:16.930 17:59:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.930 17:59:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.930 17:59:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1485012' 00:07:16.930 killing process with pid 1485012 00:07:16.930 17:59:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1485012 00:07:16.930 17:59:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1485012 00:07:17.189 17:59:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1485041 ]] 00:07:17.189 17:59:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1485041 00:07:17.189 17:59:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1485041 ']' 00:07:17.189 17:59:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1485041 00:07:17.189 17:59:17 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:17.189 17:59:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.190 17:59:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1485041 00:07:17.190 17:59:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:17.190 17:59:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:17.190 17:59:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1485041' 00:07:17.190 killing process with pid 1485041 00:07:17.190 17:59:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1485041 00:07:17.190 17:59:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1485041 00:07:17.449 17:59:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.449 17:59:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:17.449 17:59:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1485012 ]] 00:07:17.449 17:59:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1485012 00:07:17.449 17:59:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1485012 ']' 00:07:17.449 17:59:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1485012 00:07:17.449 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1485012) - No such process 00:07:17.449 17:59:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1485012 is not found' 00:07:17.450 Process with pid 1485012 is not found 00:07:17.450 17:59:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1485041 ]] 00:07:17.450 17:59:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1485041 00:07:17.450 17:59:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1485041 ']' 00:07:17.450 17:59:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1485041 00:07:17.450 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1485041) - No such process 00:07:17.450 17:59:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1485041 is not found' 00:07:17.450 Process with pid 1485041 is not found 00:07:17.450 17:59:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:17.450 00:07:17.450 real 0m18.310s 00:07:17.450 user 0m30.224s 00:07:17.450 sys 0m6.076s 00:07:17.450 17:59:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.450 17:59:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.450 ************************************ 00:07:17.450 END TEST cpu_locks 00:07:17.450 ************************************ 00:07:17.450 17:59:17 event -- common/autotest_common.sh@1142 -- # return 0 00:07:17.450 00:07:17.450 real 0m43.758s 00:07:17.450 user 1m20.767s 00:07:17.450 sys 0m10.339s 00:07:17.450 17:59:17 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.450 17:59:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:17.450 ************************************ 00:07:17.450 END TEST event 00:07:17.450 ************************************ 00:07:17.709 17:59:17 -- common/autotest_common.sh@1142 -- # return 0 00:07:17.709 17:59:17 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:17.709 17:59:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.709 17:59:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.709 17:59:17 -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.709 ************************************ 00:07:17.709 START TEST thread 00:07:17.709 ************************************ 00:07:17.709 17:59:17 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:17.709 * Looking for test storage... 00:07:17.709 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:17.709 17:59:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.709 17:59:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:17.709 17:59:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.709 17:59:17 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.709 ************************************ 00:07:17.709 START TEST thread_poller_perf 00:07:17.709 ************************************ 00:07:17.709 17:59:18 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:17.709 [2024-07-15 17:59:18.051548] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:17.709 [2024-07-15 17:59:18.051629] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485653 ] 00:07:17.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.967 [2024-07-15 17:59:18.135682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.967 [2024-07-15 17:59:18.205039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.967 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:18.901 ====================================== 00:07:18.901 busy:2506231584 (cyc) 00:07:18.901 total_run_count: 431000 00:07:18.901 tsc_hz: 2500000000 (cyc) 00:07:18.901 ====================================== 00:07:18.901 poller_cost: 5814 (cyc), 2325 (nsec) 00:07:18.901 00:07:18.901 real 0m1.247s 00:07:18.901 user 0m1.147s 00:07:18.901 sys 0m0.096s 00:07:18.901 17:59:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.901 17:59:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:18.901 ************************************ 00:07:18.901 END TEST thread_poller_perf 00:07:18.901 ************************************ 00:07:19.159 17:59:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:19.159 17:59:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.159 17:59:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:19.159 17:59:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.159 17:59:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.159 ************************************ 00:07:19.159 START TEST thread_poller_perf 00:07:19.159 ************************************ 00:07:19.159 17:59:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.159 [2024-07-15 17:59:19.376371] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:19.159 [2024-07-15 17:59:19.376450] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485942 ] 00:07:19.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.159 [2024-07-15 17:59:19.457907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.159 [2024-07-15 17:59:19.525961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.159 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:20.534 ====================================== 00:07:20.534 busy:2501931182 (cyc) 00:07:20.534 total_run_count: 5619000 00:07:20.534 tsc_hz: 2500000000 (cyc) 00:07:20.534 ====================================== 00:07:20.534 poller_cost: 445 (cyc), 178 (nsec) 00:07:20.534 00:07:20.534 real 0m1.241s 00:07:20.534 user 0m1.141s 00:07:20.534 sys 0m0.096s 00:07:20.534 17:59:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.534 17:59:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.534 ************************************ 00:07:20.534 END TEST thread_poller_perf 00:07:20.534 ************************************ 00:07:20.534 17:59:20 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:20.534 17:59:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:20.534 00:07:20.534 real 0m2.735s 00:07:20.534 user 0m2.391s 00:07:20.534 sys 0m0.357s 00:07:20.534 17:59:20 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.534 17:59:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.534 ************************************ 00:07:20.534 END TEST thread 00:07:20.534 ************************************ 00:07:20.534 17:59:20 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.534 17:59:20 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:20.534 17:59:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.534 17:59:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.534 17:59:20 -- common/autotest_common.sh@10 -- # set +x 00:07:20.534 ************************************ 00:07:20.534 START TEST accel 00:07:20.534 ************************************ 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:20.534 * Looking for test storage... 00:07:20.534 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:20.534 17:59:20 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:20.534 17:59:20 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:20.534 17:59:20 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:20.534 17:59:20 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1486258 00:07:20.534 17:59:20 accel -- accel/accel.sh@63 -- # waitforlisten 1486258 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@829 -- # '[' -z 1486258 ']' 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.534 17:59:20 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.534 17:59:20 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
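For reference, the poller_cost figures in the two poller_perf summaries above are consistent with dividing the busy cycle count by total_run_count and converting cycles to nanoseconds at the reported tsc_hz of 2.5 GHz. Checking the printed numbers with plain shell arithmetic (nothing SPDK-specific here):

  # run with a 1 us poller period
  echo $(( 2506231584 / 431000 ))               # 5814 cycles per poller run
  echo $(( 5814 * 1000000000 / 2500000000 ))    # 2325 nsec

  # run with a 0 us poller period
  echo $(( 2501931182 / 5619000 ))              # 445 cycles per poller run
  echo $(( 445 * 1000000000 / 2500000000 ))     # 178 nsec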
00:07:20.534 17:59:20 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.534 17:59:20 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.534 17:59:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.534 17:59:20 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.534 17:59:20 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.534 17:59:20 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.534 17:59:20 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:20.534 17:59:20 accel -- accel/accel.sh@41 -- # jq -r . 00:07:20.534 [2024-07-15 17:59:20.881189] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:20.534 [2024-07-15 17:59:20.881241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486258 ] 00:07:20.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.792 [2024-07-15 17:59:20.962467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.792 [2024-07-15 17:59:21.037569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.358 17:59:21 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.358 17:59:21 accel -- common/autotest_common.sh@862 -- # return 0 00:07:21.358 17:59:21 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:21.358 17:59:21 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:21.358 17:59:21 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:21.358 17:59:21 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:21.358 17:59:21 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:21.358 17:59:21 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:21.358 17:59:21 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.358 17:59:21 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:21.358 17:59:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 17:59:21 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.358 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.358 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.358 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.358 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 
17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # IFS== 00:07:21.359 17:59:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:21.359 17:59:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:21.359 17:59:21 accel -- accel/accel.sh@75 -- # killprocess 1486258 00:07:21.359 17:59:21 accel -- common/autotest_common.sh@948 -- # '[' -z 1486258 ']' 00:07:21.359 17:59:21 accel -- common/autotest_common.sh@952 -- # kill -0 1486258 00:07:21.359 17:59:21 accel -- common/autotest_common.sh@953 -- # uname 00:07:21.359 17:59:21 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.359 17:59:21 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486258 00:07:21.638 17:59:21 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.638 17:59:21 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.638 17:59:21 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486258' 00:07:21.638 killing process with pid 1486258 00:07:21.638 17:59:21 accel -- common/autotest_common.sh@967 -- # kill 1486258 00:07:21.638 17:59:21 accel -- common/autotest_common.sh@972 -- # wait 1486258 00:07:21.898 17:59:22 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:21.898 17:59:22 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:21.898 17:59:22 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.898 17:59:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.898 17:59:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.899 17:59:22 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:21.899 17:59:22 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
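The long expected_opcs block above is produced by piping accel_get_opc_assignments through the jq filter shown with it: the RPC returns a JSON object mapping each opcode to the module that will handle it, and the filter flattens that into one opcode=module line per entry, which the loop then splits on '=' into opcode and module. On this run every opcode resolves to the software module, since no hardware accel engine is configured. A standalone illustration (the output shown is abbreviated and illustrative, not copied from this run):

  ./scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # fill=software
  # crc32c=software
  # ... one line per supported opcode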
00:07:21.899 17:59:22 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.899 17:59:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:21.899 17:59:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.899 17:59:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:21.899 17:59:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:21.899 17:59:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.899 17:59:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.899 ************************************ 00:07:21.899 START TEST accel_missing_filename 00:07:21.899 ************************************ 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.899 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:21.899 17:59:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:21.899 [2024-07-15 17:59:22.263206] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:21.899 [2024-07-15 17:59:22.263263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486500 ] 00:07:22.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.158 [2024-07-15 17:59:22.346295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.158 [2024-07-15 17:59:22.415859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.158 [2024-07-15 17:59:22.456689] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.158 [2024-07-15 17:59:22.516236] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:22.419 A filename is required. 
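The accel_missing_filename test above shows that the compress workload refuses to start without an input file ("A filename is required."). Per the accel_perf option help printed further down in this log, -l names the uncompressed input for compress/decompress workloads, so a failing and a presumably working invocation would look roughly like this; the bib file is the input this test suite uses elsewhere and appears here only as an example:

  # rejected: no input file for the compress workload
  ./build/examples/accel_perf -t 1 -w compress

  # accepted: -l supplies the uncompressed input to compress
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib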
00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.419 00:07:22.419 real 0m0.352s 00:07:22.419 user 0m0.244s 00:07:22.419 sys 0m0.145s 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.419 17:59:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:22.419 ************************************ 00:07:22.419 END TEST accel_missing_filename 00:07:22.419 ************************************ 00:07:22.419 17:59:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.419 17:59:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:22.419 17:59:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:22.419 17:59:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.419 17:59:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.419 ************************************ 00:07:22.419 START TEST accel_compress_verify 00:07:22.419 ************************************ 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.419 17:59:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.419 17:59:22 accel.accel_compress_verify -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:22.419 17:59:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:22.419 [2024-07-15 17:59:22.696498] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:22.419 [2024-07-15 17:59:22.696557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486596 ] 00:07:22.419 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.419 [2024-07-15 17:59:22.780218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.679 [2024-07-15 17:59:22.850543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.679 [2024-07-15 17:59:22.891154] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.679 [2024-07-15 17:59:22.950628] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:22.679 00:07:22.679 Compression does not support the verify option, aborting. 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.679 00:07:22.679 real 0m0.354s 00:07:22.679 user 0m0.253s 00:07:22.679 sys 0m0.141s 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.679 17:59:23 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:22.679 ************************************ 00:07:22.679 END TEST accel_compress_verify 00:07:22.679 ************************************ 00:07:22.679 17:59:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.679 17:59:23 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:22.679 17:59:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:22.679 17:59:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.679 17:59:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.948 ************************************ 00:07:22.948 START TEST accel_wrong_workload 00:07:22.948 ************************************ 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:22.948 17:59:23 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:22.948 Unsupported workload type: foobar 00:07:22.948 [2024-07-15 17:59:23.130325] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:22.948 accel_perf options: 00:07:22.948 [-h help message] 00:07:22.948 [-q queue depth per core] 00:07:22.948 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:22.948 [-T number of threads per core 00:07:22.948 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:22.948 [-t time in seconds] 00:07:22.948 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:22.948 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:22.948 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:22.948 [-l for compress/decompress workloads, name of uncompressed input file 00:07:22.948 [-S for crc32c workload, use this seed value (default 0) 00:07:22.948 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:22.948 [-f for fill workload, use this BYTE value (default 255) 00:07:22.948 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:22.948 [-y verify result if this switch is on] 00:07:22.948 [-a tasks to allocate per core (default: same value as -q)] 00:07:22.948 Can be used to spread operations across a wider range of memory. 
00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:22.948 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.949 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.949 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.949 00:07:22.949 real 0m0.038s 00:07:22.949 user 0m0.021s 00:07:22.949 sys 0m0.017s 00:07:22.949 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.949 17:59:23 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:22.949 ************************************ 00:07:22.949 END TEST accel_wrong_workload 00:07:22.949 ************************************ 00:07:22.949 Error: writing output failed: Broken pipe 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.949 17:59:23 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.949 ************************************ 00:07:22.949 START TEST accel_negative_buffers 00:07:22.949 ************************************ 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:22.949 17:59:23 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:22.949 -x option must be non-negative. 
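accel_wrong_workload and accel_negative_buffers both feed accel_perf arguments it rejects: an unknown -w foobar workload type, and -x -1 for the xor source-buffer count, whose minimum is 2 according to the option help above. The rejected forms from the log, next to accepted counterparts (the crc32c invocation is exactly what the accel_crc32c test below runs; the final xor line is a presumed-valid form based only on the help text):

  ./build/examples/accel_perf -t 1 -w foobar            # rejected: Unsupported workload type: foobar
  ./build/examples/accel_perf -t 1 -w xor -y -x -1      # rejected: -x option must be non-negative
  ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # accepted: crc32c with seed 32, verify enabled
  ./build/examples/accel_perf -t 1 -w xor -y -x 2       # presumably accepted: two source buffers is the minimum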
00:07:22.949 [2024-07-15 17:59:23.240311] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:22.949 accel_perf options: 00:07:22.949 [-h help message] 00:07:22.949 [-q queue depth per core] 00:07:22.949 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:22.949 [-T number of threads per core 00:07:22.949 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:22.949 [-t time in seconds] 00:07:22.949 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:22.949 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:22.949 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:22.949 [-l for compress/decompress workloads, name of uncompressed input file 00:07:22.949 [-S for crc32c workload, use this seed value (default 0) 00:07:22.949 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:22.949 [-f for fill workload, use this BYTE value (default 255) 00:07:22.949 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:22.949 [-y verify result if this switch is on] 00:07:22.949 [-a tasks to allocate per core (default: same value as -q)] 00:07:22.949 Can be used to spread operations across a wider range of memory. 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.949 00:07:22.949 real 0m0.036s 00:07:22.949 user 0m0.044s 00:07:22.949 sys 0m0.013s 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.949 17:59:23 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:22.949 ************************************ 00:07:22.949 END TEST accel_negative_buffers 00:07:22.949 ************************************ 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.949 17:59:23 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.949 17:59:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.949 ************************************ 00:07:22.949 START TEST accel_crc32c 00:07:22.949 ************************************ 00:07:22.949 17:59:23 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:22.949 17:59:23 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:22.949 17:59:23 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:23.212 [2024-07-15 17:59:23.346243] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:23.212 [2024-07-15 17:59:23.346298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486667 ] 00:07:23.212 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.212 [2024-07-15 17:59:23.429309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.212 [2024-07-15 17:59:23.498481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.212 17:59:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.589 17:59:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.589 00:07:24.589 real 0m1.355s 00:07:24.589 user 0m1.230s 00:07:24.589 sys 0m0.140s 00:07:24.589 17:59:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.589 17:59:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:24.589 ************************************ 00:07:24.589 END TEST accel_crc32c 00:07:24.589 ************************************ 00:07:24.589 17:59:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.589 17:59:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:24.589 17:59:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.589 17:59:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.589 17:59:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.589 ************************************ 00:07:24.589 START TEST accel_crc32c_C2 00:07:24.589 ************************************ 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:24.589 [2024-07-15 17:59:24.783073] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:24.589 [2024-07-15 17:59:24.783134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486952 ] 00:07:24.589 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.589 [2024-07-15 17:59:24.864618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.589 [2024-07-15 17:59:24.933602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:24.589 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val=0 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.590 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.849 17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.849 
17:59:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.808 00:07:25.808 real 0m1.354s 00:07:25.808 user 0m1.231s 00:07:25.808 sys 0m0.141s 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.808 17:59:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:25.808 ************************************ 00:07:25.808 END TEST accel_crc32c_C2 00:07:25.808 ************************************ 00:07:25.808 17:59:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.808 17:59:26 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:25.808 17:59:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.808 17:59:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.808 17:59:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.808 ************************************ 00:07:25.808 START TEST accel_copy 00:07:25.808 ************************************ 00:07:25.808 17:59:26 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:25.808 17:59:26 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:25.808 17:59:26 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:26.069 [2024-07-15 17:59:26.218346] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:26.069 [2024-07-15 17:59:26.218406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487233 ] 00:07:26.069 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.069 [2024-07-15 17:59:26.301850] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.069 [2024-07-15 17:59:26.370355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.069 17:59:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:27.458 17:59:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.458 00:07:27.458 real 0m1.358s 00:07:27.458 user 0m1.221s 00:07:27.458 sys 0m0.151s 00:07:27.458 17:59:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.458 17:59:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:27.458 ************************************ 00:07:27.458 END TEST accel_copy 00:07:27.458 ************************************ 00:07:27.458 17:59:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.458 17:59:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.458 17:59:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.458 17:59:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.458 17:59:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.458 ************************************ 00:07:27.458 START TEST accel_fill 00:07:27.458 ************************************ 00:07:27.458 17:59:27 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:27.458 17:59:27 
accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:27.458 [2024-07-15 17:59:27.654647] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:27.458 [2024-07-15 17:59:27.654706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487521 ] 00:07:27.458 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.458 [2024-07-15 17:59:27.737985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.458 [2024-07-15 17:59:27.806143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.458 17:59:27 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:27.458 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.717 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.718 17:59:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.655 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:28.656 17:59:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.656 00:07:28.656 real 0m1.356s 00:07:28.656 user 0m1.225s 00:07:28.656 sys 0m0.147s 00:07:28.656 17:59:28 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.656 17:59:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:28.656 ************************************ 00:07:28.656 END TEST accel_fill 00:07:28.656 ************************************ 00:07:28.656 17:59:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.656 17:59:29 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:28.656 17:59:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:28.656 17:59:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.656 17:59:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.916 ************************************ 00:07:28.916 START TEST accel_copy_crc32c 00:07:28.916 ************************************ 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.916 
17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:28.916 [2024-07-15 17:59:29.090315] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:28.916 [2024-07-15 17:59:29.090373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487802 ] 00:07:28.916 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.916 [2024-07-15 17:59:29.173575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.916 [2024-07-15 17:59:29.241822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:28.916 17:59:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.295 00:07:30.295 real 0m1.356s 00:07:30.295 user 0m1.226s 00:07:30.295 sys 0m0.147s 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.295 17:59:30 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:30.295 ************************************ 00:07:30.295 END TEST accel_copy_crc32c 00:07:30.295 ************************************ 00:07:30.295 17:59:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.295 17:59:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.295 17:59:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:30.295 17:59:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.295 17:59:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.295 ************************************ 00:07:30.295 START TEST accel_copy_crc32c_C2 00:07:30.295 ************************************ 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:30.295 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:30.295 [2024-07-15 17:59:30.532428] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:30.295 [2024-07-15 17:59:30.532500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488092 ] 00:07:30.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.295 [2024-07-15 17:59:30.617272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.295 [2024-07-15 17:59:30.683044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.554 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.555 17:59:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.490 17:59:31 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.490 00:07:31.490 real 0m1.356s 00:07:31.490 user 0m1.231s 00:07:31.490 sys 0m0.139s 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.490 17:59:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:31.490 ************************************ 00:07:31.490 END TEST accel_copy_crc32c_C2 00:07:31.490 ************************************ 00:07:31.749 17:59:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.749 17:59:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:31.749 17:59:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:31.749 17:59:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.749 17:59:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.749 ************************************ 00:07:31.749 START TEST accel_dualcast 00:07:31.749 ************************************ 00:07:31.749 17:59:31 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:31.749 17:59:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:31.749 [2024-07-15 17:59:31.969718] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
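
The dualcast pass starting here is launched the same way as the copy_crc32c case that just finished: the harness runs build/examples/accel_perf for one second (-t 1) with the workload name (-w dualcast) and verification enabled (-y), feeding the config assembled by build_accel_config over -c /dev/fd/62 (the jq -r . step in the trace). A standalone invocation along the same lines might look like the sketch below; the checkout path is taken from the trace, and dropping -c so the built-in software module is used is an assumption for illustration, not something the log states.

    # Hedged sketch of a manual dualcast run mirroring the traced flags;
    # omitting -c (and so relying on the default software module) is an assumption.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y
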
00:07:31.749 [2024-07-15 17:59:31.969776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488371 ] 00:07:31.749 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.749 [2024-07-15 17:59:32.054216] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.749 [2024-07-15 17:59:32.122730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.008 17:59:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.944 17:59:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 17:59:33 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:32.945 17:59:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.945 00:07:32.945 real 0m1.361s 00:07:32.945 user 0m1.229s 00:07:32.945 sys 0m0.146s 00:07:32.945 17:59:33 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.945 17:59:33 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:32.945 ************************************ 00:07:32.945 END TEST accel_dualcast 00:07:32.945 ************************************ 00:07:32.945 17:59:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.945 17:59:33 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:32.945 17:59:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:32.945 17:59:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.945 17:59:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.205 ************************************ 00:07:33.205 START TEST accel_compare 00:07:33.205 ************************************ 00:07:33.205 17:59:33 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:33.205 [2024-07-15 17:59:33.408368] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
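
The repeated IFS=: / read -r var val / case "$var" steps traced throughout these cases are the harness reading accel_perf's printed configuration summary line by line and remembering the opcode and module it reports (here accel_opc=compare and accel_module=software), so the closing [[ -n software ]] and [[ -n compare ]] checks can confirm that the expected path actually ran. A minimal sketch of that style of parse loop, assuming summary lines of the form "Label: value" (the exact label text is not visible in the trace), is:

    # Minimal sketch of a "Label: value" parse loop like the one traced above;
    # the label names ("Workload Type", "Module") are assumptions, not log facts.
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;
            *"Module"*)        accel_module=${val//[[:space:]]/} ;;
        esac
    done < <("$SPDK_DIR/build/examples/accel_perf" -t 1 -w compare -y)
    [[ -n $accel_opc && -n $accel_module ]]
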
00:07:33.205 [2024-07-15 17:59:33.408432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488652 ] 00:07:33.205 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.205 [2024-07-15 17:59:33.491772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.205 [2024-07-15 17:59:33.560460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.205 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.465 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.465 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.466 17:59:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.420 17:59:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 
17:59:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:34.421 17:59:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.421 00:07:34.421 real 0m1.357s 00:07:34.421 user 0m1.232s 00:07:34.421 sys 0m0.139s 00:07:34.421 17:59:34 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.421 17:59:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:34.421 ************************************ 00:07:34.421 END TEST accel_compare 00:07:34.421 ************************************ 00:07:34.421 17:59:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.421 17:59:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:34.421 17:59:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.421 17:59:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.421 17:59:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.421 ************************************ 00:07:34.421 START TEST accel_xor 00:07:34.421 ************************************ 00:07:34.421 17:59:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.421 17:59:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.679 17:59:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.679 17:59:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:34.679 17:59:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:34.679 [2024-07-15 17:59:34.841977] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
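
Each case in this log is driven by run_test from autotest_common.sh; the trace shows it invoked here as run_test accel_xor accel_test -t 1 -w xor -y, and it is what prints the START TEST / END TEST banners and the real/user/sys timings that bracket every block above. A rough sketch of a wrapper with that observable behaviour follows; the banner text is copied from the log, but the body is an illustration, not the actual autotest_common.sh implementation.

    # Rough run_test-style wrapper (illustrative only, not the real autotest_common.sh).
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # Usage mirroring the traced call:
    # run_test_sketch accel_xor accel_test -t 1 -w xor -y
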
00:07:34.680 [2024-07-15 17:59:34.842109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488939 ] 00:07:34.680 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.680 [2024-07-15 17:59:34.924588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.680 [2024-07-15 17:59:34.992425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.680 17:59:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.055 00:07:36.055 real 0m1.358s 00:07:36.055 user 0m1.225s 00:07:36.055 sys 0m0.148s 00:07:36.055 17:59:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.055 17:59:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:36.055 ************************************ 00:07:36.055 END TEST accel_xor 00:07:36.055 ************************************ 00:07:36.055 17:59:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.055 17:59:36 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:36.055 17:59:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:36.055 17:59:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.055 17:59:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.055 ************************************ 00:07:36.055 START TEST accel_xor 00:07:36.055 ************************************ 00:07:36.055 17:59:36 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:36.055 17:59:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:36.055 [2024-07-15 17:59:36.275868] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
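
This second xor pass repeats the previous one with -x 3 added; the only difference visible in the trace is that the source-buffer count picked up by the parser changes from 2 in the plain run to 3 here. Run standalone it would look like the sketch below (path from the trace; omitting -c is, as before, an assumption).

    # Three-source xor mirroring the traced flags; -c config omitted (assumption).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
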
00:07:36.055 [2024-07-15 17:59:36.275924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489175 ] 00:07:36.055 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.055 [2024-07-15 17:59:36.359860] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.055 [2024-07-15 17:59:36.433214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.314 17:59:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:37.246 17:59:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.246 00:07:37.246 real 0m1.366s 00:07:37.246 user 0m1.239s 00:07:37.246 sys 0m0.141s 00:07:37.246 17:59:37 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.246 17:59:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:37.246 ************************************ 00:07:37.246 END TEST accel_xor 00:07:37.246 ************************************ 00:07:37.504 17:59:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.504 17:59:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:37.504 17:59:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:37.504 17:59:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.504 17:59:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.504 ************************************ 00:07:37.504 START TEST accel_dif_verify 00:07:37.504 ************************************ 00:07:37.504 17:59:37 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:37.504 17:59:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:37.504 [2024-07-15 17:59:37.721095] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
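
The dif_verify case starting here (and the dif_generate case after it) differs from the earlier workloads in two visible ways: the accel_test call carries no -y flag, and the parser picks up extra size fields beyond the usual '4096 bytes' transfer size, namely a second '4096 bytes' plus '512 bytes' and '8 bytes'. Those values are consistent with a 512-byte block carrying 8 bytes of DIF protection metadata, though the field names themselves are not shown in the trace, so that reading is an inference rather than a log fact. The traced command, run standalone, would be:

    # dif_verify run mirroring the traced flags; -c config omitted (assumption).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify
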
00:07:37.504 [2024-07-15 17:59:37.721151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489398 ] 00:07:37.504 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.504 [2024-07-15 17:59:37.804220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.504 [2024-07-15 17:59:37.872364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.763 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.764 17:59:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:38.701 17:59:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.701 00:07:38.701 real 0m1.359s 00:07:38.701 user 0m1.235s 00:07:38.701 sys 0m0.140s 00:07:38.701 17:59:39 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.701 17:59:39 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:38.701 ************************************ 00:07:38.701 END TEST accel_dif_verify 00:07:38.701 ************************************ 00:07:38.701 17:59:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.701 17:59:39 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:38.701 17:59:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:38.701 17:59:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.701 17:59:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.960 ************************************ 00:07:38.960 START TEST accel_dif_generate 00:07:38.960 ************************************ 00:07:38.960 17:59:39 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.960 
17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:38.960 [2024-07-15 17:59:39.157685] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:38.960 [2024-07-15 17:59:39.157741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489620 ] 00:07:38.960 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.960 [2024-07-15 17:59:39.242134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.960 [2024-07-15 17:59:39.311142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.960 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:38.961 17:59:39 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.961 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:39.220 17:59:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.159 17:59:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:40.159 17:59:40 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.159 00:07:40.159 real 0m1.359s 00:07:40.159 user 0m1.227s 00:07:40.159 sys 0m0.148s 00:07:40.159 17:59:40 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.159 17:59:40 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:40.159 ************************************ 00:07:40.159 END TEST accel_dif_generate 00:07:40.159 ************************************ 00:07:40.159 17:59:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.159 17:59:40 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:40.159 17:59:40 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:40.159 17:59:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.159 17:59:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.483 ************************************ 00:07:40.483 START TEST accel_dif_generate_copy 00:07:40.483 ************************************ 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:40.483 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:40.483 [2024-07-15 17:59:40.592519] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:40.483 [2024-07-15 17:59:40.592580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1489847 ] 00:07:40.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.484 [2024-07-15 17:59:40.676201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.484 [2024-07-15 17:59:40.745659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.484 17:59:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.862 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.863 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:41.863 17:59:41 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.863 00:07:41.863 real 0m1.360s 00:07:41.863 user 0m1.226s 00:07:41.863 sys 0m0.150s 00:07:41.863 17:59:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.863 17:59:41 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.863 ************************************ 00:07:41.863 END TEST accel_dif_generate_copy 00:07:41.863 ************************************ 00:07:41.863 17:59:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.863 17:59:41 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:41.863 17:59:41 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.863 17:59:41 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:41.863 17:59:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.863 17:59:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.863 ************************************ 00:07:41.863 START TEST accel_comp 00:07:41.863 ************************************ 00:07:41.863 17:59:42 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.863 17:59:42 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:41.863 [2024-07-15 17:59:42.018693] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:41.863 [2024-07-15 17:59:42.018737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490122 ] 00:07:41.863 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.863 [2024-07-15 17:59:42.093273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.863 [2024-07-15 17:59:42.163391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.863 17:59:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:43.253 17:59:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.253 00:07:43.253 real 0m1.341s 00:07:43.253 user 0m1.222s 00:07:43.253 sys 0m0.133s 00:07:43.253 17:59:43 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.253 17:59:43 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:43.253 ************************************ 00:07:43.253 END TEST accel_comp 00:07:43.253 ************************************ 00:07:43.253 17:59:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.253 17:59:43 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.253 17:59:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:43.253 17:59:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.253 17:59:43 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.253 ************************************ 00:07:43.253 START TEST accel_decomp 00:07:43.253 ************************************ 00:07:43.253 17:59:43 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:43.253 [2024-07-15 17:59:43.452456] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:43.253 [2024-07-15 17:59:43.452514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490403 ] 00:07:43.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.253 [2024-07-15 17:59:43.536079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.253 [2024-07-15 17:59:43.604522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.253 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.513 17:59:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.450 17:59:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.450 00:07:44.450 real 0m1.361s 00:07:44.450 user 0m1.226s 00:07:44.450 sys 0m0.151s 00:07:44.450 17:59:44 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.450 17:59:44 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:44.450 ************************************ 00:07:44.450 END TEST accel_decomp 00:07:44.450 ************************************ 00:07:44.450 17:59:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.450 17:59:44 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.450 17:59:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:44.450 17:59:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.450 17:59:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.709 ************************************ 00:07:44.709 START TEST accel_decomp_full 00:07:44.709 ************************************ 00:07:44.709 17:59:44 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.709 17:59:44 accel.accel_decomp_full -- 
accel/accel.sh@12 -- # build_accel_config 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:44.709 17:59:44 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:44.709 [2024-07-15 17:59:44.893395] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:44.709 [2024-07-15 17:59:44.893464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490688 ] 00:07:44.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.709 [2024-07-15 17:59:44.975280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.709 [2024-07-15 17:59:45.043614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@23 
-- # accel_opc=decompress 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:44.709 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.710 17:59:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.090 17:59:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.090 00:07:46.090 real 0m1.368s 00:07:46.090 user 0m1.244s 00:07:46.090 sys 0m0.138s 00:07:46.090 17:59:46 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.090 17:59:46 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:46.090 ************************************ 00:07:46.090 END TEST accel_decomp_full 00:07:46.090 ************************************ 00:07:46.090 17:59:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.090 17:59:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.090 17:59:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:46.090 17:59:46 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.090 17:59:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.090 ************************************ 00:07:46.090 START TEST accel_decomp_mcore 00:07:46.090 ************************************ 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:46.090 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:46.090 [2024-07-15 17:59:46.343332] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
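For reference, the multi-core decompress case being launched here reduces to the single accel_perf invocation visible in the trace above. A minimal hand-run sketch follows; the binary path, input file and flags are copied from the trace, and dropping the -c /dev/fd/62 JSON config that build_accel_config pipes in is an assumption (it should not be needed for the plain software path):

  # software decompress for 1 second on four cores (mask 0xf), verifying the output (-y)
  # -l points at the pre-compressed input shipped as test/accel/bib
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -m 0xf -y \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib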
00:07:46.090 [2024-07-15 17:59:46.343421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490974 ] 00:07:46.090 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.090 [2024-07-15 17:59:46.424951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.350 [2024-07-15 17:59:46.496277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.350 [2024-07-15 17:59:46.496372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.350 [2024-07-15 17:59:46.496456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.350 [2024-07-15 17:59:46.496474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.350 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.351 17:59:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.289 00:07:47.289 real 0m1.372s 00:07:47.289 user 0m4.563s 00:07:47.289 sys 0m0.154s 00:07:47.289 17:59:47 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.289 17:59:47 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:47.289 ************************************ 00:07:47.289 END TEST accel_decomp_mcore 00:07:47.289 ************************************ 00:07:47.549 17:59:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.549 17:59:47 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.549 17:59:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:47.549 17:59:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.549 17:59:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.549 ************************************ 00:07:47.549 START TEST accel_decomp_full_mcore 00:07:47.549 ************************************ 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:47.549 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:47.549 [2024-07-15 17:59:47.790870] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
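The _full_mcore variant differs from the previous case only by -o 0, which the harness uses for the full-buffer runs; the trace below pairs it with a '111250 bytes' transfer size, so reading -o 0 as "use the whole bib payload" is an inference rather than documented behaviour. Under the same assumptions as the earlier sketch:

  # full-buffer decompress across cores 0-3, verified, 1-second run
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -o 0 -m 0xf -y \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib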
00:07:47.549 [2024-07-15 17:59:47.790927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491261 ] 00:07:47.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.549 [2024-07-15 17:59:47.873089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.549 [2024-07-15 17:59:47.944054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.549 [2024-07-15 17:59:47.944148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.549 [2024-07-15 17:59:47.944216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.549 [2024-07-15 17:59:47.944218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:47.810 17:59:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.747 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.748 00:07:48.748 real 0m1.380s 00:07:48.748 user 0m4.604s 00:07:48.748 sys 0m0.145s 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.748 17:59:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:48.748 ************************************ 00:07:48.748 END TEST accel_decomp_full_mcore 00:07:48.748 ************************************ 00:07:49.007 17:59:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.007 17:59:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.007 17:59:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:49.007 17:59:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.007 17:59:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.007 ************************************ 00:07:49.007 START TEST accel_decomp_mthread 00:07:49.007 ************************************ 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:49.007 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:49.007 [2024-07-15 17:59:49.240714] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
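The mthread case swaps the 0xf core mask for a single core (the EAL parameters below carry -c 0x1) and adds -T 2, the accel_perf thread-count knob that shows up as val=2 further down. A comparable stand-alone invocation, with the same caveat about the omitted JSON config:

  # single-core decompress driven by two worker threads (-T 2), verified, 1 second
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -T 2 -y \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib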
00:07:49.007 [2024-07-15 17:59:49.240770] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491548 ] 00:07:49.007 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.007 [2024-07-15 17:59:49.320739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.007 [2024-07-15 17:59:49.388397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.266 17:59:49 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.266 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.267 17:59:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.202 00:07:50.202 real 0m1.352s 00:07:50.202 user 0m1.234s 00:07:50.202 sys 0m0.134s 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.202 17:59:50 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:50.202 ************************************ 00:07:50.202 END TEST accel_decomp_mthread 00:07:50.202 ************************************ 00:07:50.462 17:59:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.462 17:59:50 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.462 17:59:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:50.462 17:59:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.462 17:59:50 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.462 ************************************ 00:07:50.462 START TEST accel_decomp_full_mthread 00:07:50.462 ************************************ 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:50.462 [2024-07-15 17:59:50.669425] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
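The last decompress variant combines the two knobs seen so far, full-size transfers (-o 0) and two threads (-T 2) on one core. Under the same assumptions, a manual equivalent would be:

  # full-buffer decompress, two threads on core 0, verified, 1 second
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w decompress -o 0 -T 2 -y \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib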
00:07:50.462 [2024-07-15 17:59:50.669485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491827 ] 00:07:50.462 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.462 [2024-07-15 17:59:50.747926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.462 [2024-07-15 17:59:50.817413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.462 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.722 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:50.723 17:59:50 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:50.723 17:59:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.657 00:07:51.657 real 0m1.372s 00:07:51.657 user 0m1.251s 00:07:51.657 sys 0m0.134s 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.657 17:59:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:51.657 ************************************ 00:07:51.657 END 
TEST accel_decomp_full_mthread 00:07:51.657 ************************************ 00:07:51.914 17:59:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.914 17:59:52 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:51.914 17:59:52 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.914 17:59:52 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:51.914 17:59:52 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.914 17:59:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.914 17:59:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.914 17:59:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.914 17:59:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.914 17:59:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.914 17:59:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.914 17:59:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.914 17:59:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:51.914 17:59:52 accel -- accel/accel.sh@41 -- # jq -r . 00:07:51.914 ************************************ 00:07:51.914 START TEST accel_dif_functional_tests 00:07:51.914 ************************************ 00:07:51.914 17:59:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.914 [2024-07-15 17:59:52.154653] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:51.914 [2024-07-15 17:59:52.154694] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492120 ] 00:07:51.914 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.914 [2024-07-15 17:59:52.233869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.914 [2024-07-15 17:59:52.303744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.914 [2024-07-15 17:59:52.303839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.914 [2024-07-15 17:59:52.303840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.172 00:07:52.172 00:07:52.172 CUnit - A unit testing framework for C - Version 2.1-3 00:07:52.172 http://cunit.sourceforge.net/ 00:07:52.172 00:07:52.172 00:07:52.172 Suite: accel_dif 00:07:52.172 Test: verify: DIF generated, GUARD check ...passed 00:07:52.172 Test: verify: DIF generated, APPTAG check ...passed 00:07:52.172 Test: verify: DIF generated, REFTAG check ...passed 00:07:52.172 Test: verify: DIF not generated, GUARD check ...[2024-07-15 17:59:52.371018] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:52.172 passed 00:07:52.172 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 17:59:52.371069] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:52.172 passed 00:07:52.172 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 17:59:52.371097] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:52.172 passed 00:07:52.172 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:52.172 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
17:59:52.371145] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:52.172 passed 00:07:52.172 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:52.172 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:52.172 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:52.172 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 17:59:52.371251] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:52.172 passed 00:07:52.172 Test: verify copy: DIF generated, GUARD check ...passed 00:07:52.172 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:52.172 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:52.172 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 17:59:52.371364] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:52.172 passed 00:07:52.172 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 17:59:52.371389] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:52.172 passed 00:07:52.172 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 17:59:52.371413] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:52.172 passed 00:07:52.172 Test: generate copy: DIF generated, GUARD check ...passed 00:07:52.172 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:52.172 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:52.172 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:52.172 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:52.172 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:52.172 Test: generate copy: iovecs-len validate ...[2024-07-15 17:59:52.371579] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:52.172 passed 00:07:52.172 Test: generate copy: buffer alignment validate ...passed 00:07:52.172 00:07:52.172 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.172 suites 1 1 n/a 0 0 00:07:52.172 tests 26 26 26 0 0 00:07:52.172 asserts 115 115 115 0 n/a 00:07:52.172 00:07:52.172 Elapsed time = 0.002 seconds 00:07:52.172 00:07:52.172 real 0m0.423s 00:07:52.172 user 0m0.556s 00:07:52.172 sys 0m0.176s 00:07:52.172 17:59:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.172 17:59:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:52.172 ************************************ 00:07:52.172 END TEST accel_dif_functional_tests 00:07:52.172 ************************************ 00:07:52.431 17:59:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.431 00:07:52.431 real 0m31.858s 00:07:52.431 user 0m34.683s 00:07:52.431 sys 0m5.281s 00:07:52.431 17:59:52 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.431 17:59:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.431 ************************************ 00:07:52.431 END TEST accel 00:07:52.431 ************************************ 00:07:52.431 17:59:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:52.431 17:59:52 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:52.431 17:59:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.431 17:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.431 17:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:52.431 ************************************ 00:07:52.431 START TEST accel_rpc 00:07:52.431 ************************************ 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:52.431 * Looking for test storage... 00:07:52.431 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:52.431 17:59:52 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:52.431 17:59:52 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1492186 00:07:52.431 17:59:52 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1492186 00:07:52.431 17:59:52 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1492186 ']' 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.431 17:59:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.431 [2024-07-15 17:59:52.829389] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:07:52.431 [2024-07-15 17:59:52.829438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492186 ] 00:07:52.689 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.689 [2024-07-15 17:59:52.913323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.689 [2024-07-15 17:59:52.986258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.260 17:59:53 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.260 17:59:53 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:53.260 17:59:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:53.260 17:59:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:53.260 17:59:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:53.260 17:59:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:53.260 17:59:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:53.260 17:59:53 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.260 17:59:53 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.260 17:59:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.260 ************************************ 00:07:53.260 START TEST accel_assign_opcode 00:07:53.260 ************************************ 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.260 [2024-07-15 17:59:53.652259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.260 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.519 [2024-07-15 17:59:53.660276] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
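The accel_rpc sequence traced here exercises SPDK's deferred-init RPC path: the target is started with --wait-for-rpc, the copy opcode is pinned to the software module while the framework is still un-initialized, and the assignment is read back afterwards. A condensed, hand-runnable version of the same flow (paths assume the workspace layout seen in this log; background-PID and socket handling are simplified, and the polling loop is just one simple way to mimic waitforlisten):

  # start the target in the pre-init state so accel can still be reconfigured
  ./build/bin/spdk_tgt --wait-for-rpc &
  # wait until the RPC socket answers
  until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done
  # pin the copy opcode to the software module, then finish initialization
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  # read the assignment back; the test below expects "software" here
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy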
00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.519 software 00:07:53.519 00:07:53.519 real 0m0.223s 00:07:53.519 user 0m0.043s 00:07:53.519 sys 0m0.015s 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.519 17:59:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:53.519 ************************************ 00:07:53.519 END TEST accel_assign_opcode 00:07:53.519 ************************************ 00:07:53.519 17:59:53 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:53.519 17:59:53 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1492186 00:07:53.519 17:59:53 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1492186 ']' 00:07:53.519 17:59:53 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1492186 00:07:53.519 17:59:53 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:53.519 17:59:53 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:53.777 17:59:53 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1492186 00:07:53.777 17:59:53 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:53.777 17:59:53 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:53.777 17:59:53 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1492186' 00:07:53.777 killing process with pid 1492186 00:07:53.777 17:59:53 accel_rpc -- common/autotest_common.sh@967 -- # kill 1492186 00:07:53.777 17:59:53 accel_rpc -- common/autotest_common.sh@972 -- # wait 1492186 00:07:54.036 00:07:54.036 real 0m1.607s 00:07:54.036 user 0m1.644s 00:07:54.036 sys 0m0.481s 00:07:54.036 17:59:54 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.036 17:59:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.036 ************************************ 00:07:54.036 END TEST accel_rpc 00:07:54.036 ************************************ 00:07:54.036 17:59:54 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.036 17:59:54 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:54.036 17:59:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.036 17:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.036 17:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:54.036 ************************************ 00:07:54.036 START TEST app_cmdline 00:07:54.036 ************************************ 00:07:54.036 17:59:54 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:54.295 * Looking for test storage... 
00:07:54.295 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:54.295 17:59:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:54.295 17:59:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1492533 00:07:54.295 17:59:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1492533 00:07:54.295 17:59:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:54.295 17:59:54 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1492533 ']' 00:07:54.295 17:59:54 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.295 17:59:54 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.295 17:59:54 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.295 17:59:54 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.295 17:59:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.295 [2024-07-15 17:59:54.518244] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:07:54.295 [2024-07-15 17:59:54.518295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492533 ] 00:07:54.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.295 [2024-07-15 17:59:54.601337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.295 [2024-07-15 17:59:54.674517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.901 17:59:55 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.901 17:59:55 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:54.901 17:59:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:55.159 { 00:07:55.159 "version": "SPDK v24.09-pre git sha1 2da93d0d7", 00:07:55.159 "fields": { 00:07:55.159 "major": 24, 00:07:55.159 "minor": 9, 00:07:55.160 "patch": 0, 00:07:55.160 "suffix": "-pre", 00:07:55.160 "commit": "2da93d0d7" 00:07:55.160 } 00:07:55.160 } 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:55.160 17:59:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:55.160 17:59:55 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:55.419 request: 00:07:55.419 { 00:07:55.419 "method": "env_dpdk_get_mem_stats", 00:07:55.419 "req_id": 1 00:07:55.419 } 00:07:55.419 Got JSON-RPC error response 00:07:55.419 response: 00:07:55.419 { 00:07:55.419 "code": -32601, 00:07:55.419 "message": "Method not found" 00:07:55.419 } 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:55.419 17:59:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1492533 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1492533 ']' 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1492533 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1492533 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1492533' 00:07:55.419 killing process with pid 1492533 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@967 -- # kill 1492533 00:07:55.419 17:59:55 app_cmdline -- common/autotest_common.sh@972 -- # wait 1492533 00:07:55.677 00:07:55.677 real 0m1.678s 00:07:55.677 user 0m1.925s 00:07:55.677 sys 0m0.496s 00:07:55.677 17:59:56 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.677 17:59:56 app_cmdline -- 
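What cmdline.sh verifies above is the RPC allow-list: started with --rpcs-allowed spdk_get_version,rpc_get_methods, the target serves only those two methods, and anything else (here env_dpdk_get_mem_stats, a method that does exist on a fully started target) is rejected with JSON-RPC error -32601 "Method not found". Reproduced by hand against the same build (illustrative sketch only; pid/socket handling omitted):

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
  ./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # filtered: fails with "Method not found" (-32601)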
common/autotest_common.sh@10 -- # set +x 00:07:55.677 ************************************ 00:07:55.677 END TEST app_cmdline 00:07:55.677 ************************************ 00:07:55.936 17:59:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:55.936 17:59:56 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:55.937 17:59:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.937 17:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.937 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:55.937 ************************************ 00:07:55.937 START TEST version 00:07:55.937 ************************************ 00:07:55.937 17:59:56 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:55.937 * Looking for test storage... 00:07:55.937 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:55.937 17:59:56 version -- app/version.sh@17 -- # get_header_version major 00:07:55.937 17:59:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # cut -f2 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.937 17:59:56 version -- app/version.sh@17 -- # major=24 00:07:55.937 17:59:56 version -- app/version.sh@18 -- # get_header_version minor 00:07:55.937 17:59:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # cut -f2 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.937 17:59:56 version -- app/version.sh@18 -- # minor=9 00:07:55.937 17:59:56 version -- app/version.sh@19 -- # get_header_version patch 00:07:55.937 17:59:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # cut -f2 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.937 17:59:56 version -- app/version.sh@19 -- # patch=0 00:07:55.937 17:59:56 version -- app/version.sh@20 -- # get_header_version suffix 00:07:55.937 17:59:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # cut -f2 00:07:55.937 17:59:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:55.937 17:59:56 version -- app/version.sh@20 -- # suffix=-pre 00:07:55.937 17:59:56 version -- app/version.sh@22 -- # version=24.9 00:07:55.937 17:59:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:55.937 17:59:56 version -- app/version.sh@28 -- # version=24.9rc0 00:07:55.937 17:59:56 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:55.937 17:59:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:55.937 17:59:56 version -- app/version.sh@30 -- # py_version=24.9rc0 
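The version test assembles the human-readable version string from the C header and then cross-checks it against the Python package, so the two cannot drift silently. The extraction pipeline traced above, condensed into a few lines (workspace path as in this log; ver_h and the simplified PYTHONPATH are conveniences of this sketch, and the suffix handling is reduced to the one case exercised here):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ver_h=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$ver_h" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$ver_h" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$ver_h" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$ver_h" | cut -f2 | tr -d '"')
  version="$major.$minor"
  (( patch != 0 )) && version+=".$patch"
  version+="${suffix/-pre/rc0}"              # the "-pre" suffix is published as "rc0", hence 24.9rc0
  PYTHONPATH=./python python3 -c 'import spdk; print(spdk.__version__)'   # must print the same string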
00:07:55.937 17:59:56 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:55.937 00:07:55.937 real 0m0.190s 00:07:55.937 user 0m0.098s 00:07:55.937 sys 0m0.141s 00:07:55.937 17:59:56 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.937 17:59:56 version -- common/autotest_common.sh@10 -- # set +x 00:07:55.937 ************************************ 00:07:55.937 END TEST version 00:07:55.937 ************************************ 00:07:56.196 17:59:56 -- common/autotest_common.sh@1142 -- # return 0 00:07:56.196 17:59:56 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@198 -- # uname -s 00:07:56.196 17:59:56 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:56.196 17:59:56 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:56.196 17:59:56 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:56.196 17:59:56 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:56.196 17:59:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.196 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.196 17:59:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:56.196 17:59:56 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:56.196 17:59:56 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:56.196 17:59:56 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.196 17:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.196 17:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:56.196 ************************************ 00:07:56.196 START TEST nvmf_rdma 00:07:56.196 ************************************ 00:07:56.196 17:59:56 nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:56.196 * Looking for test storage... 00:07:56.196 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:56.196 17:59:56 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.196 17:59:56 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.196 17:59:56 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.196 17:59:56 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.196 17:59:56 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.196 17:59:56 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.196 17:59:56 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:56.196 17:59:56 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:56.196 17:59:56 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.196 17:59:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:56.196 17:59:56 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:56.196 17:59:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.196 17:59:56 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.196 17:59:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:56.456 ************************************ 00:07:56.456 START TEST nvmf_example 00:07:56.456 ************************************ 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:56.456 * Looking for test storage... 
00:07:56.456 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:56.456 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:56.457 17:59:56 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.457 17:59:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:04.575 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:04.575 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:04.575 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.576 18:00:04 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:04.576 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:04.576 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:04.576 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:04.576 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:04.576 altname enp217s0f0np0 00:08:04.576 altname ens818f0np0 00:08:04.576 inet 192.168.100.8/24 scope global mlx_0_0 00:08:04.576 valid_lft forever preferred_lft forever 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:04.576 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:04.576 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:04.576 altname enp217s0f1np1 00:08:04.576 altname ens818f1np1 00:08:04.576 inet 192.168.100.9/24 scope global mlx_0_1 00:08:04.576 valid_lft forever preferred_lft forever 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:04.576 18:00:04 
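Before any NVMe-oF RDMA traffic can flow, the harness makes sure the kernel RDMA stack is loaded and that each mlx5 port found above (0000:d9:00.0/1) carries an address from the 192.168.100.0/24 test prefix; that is what the rdma_device_init / allocate_nic_ips trace amounts to. Done by hand on a similar host the equivalent steps look roughly like this (interface names match this particular machine and will differ elsewhere):

  # the same module set load_ib_rdma_modules probes
  sudo modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
  # give the two RDMA ports their test addresses
  sudo ip addr add 192.168.100.8/24 dev mlx_0_0
  sudo ip addr add 192.168.100.9/24 dev mlx_0_1
  # confirm what the harness later parses out of "ip -o -4 addr show"
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8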
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:04.576 192.168.100.9' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:04.576 192.168.100.9' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:04.576 
18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:04.576 192.168.100.9' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:04.576 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1497212 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1497212 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1497212 ']' 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.577 18:00:04 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:04.577 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.836 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:05.094 18:00:05 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # 
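Everything the example target needs has now been configured over RPC: an RDMA transport, a 64 MiB / 512 B malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 192.168.100.8:4420. The same bring-up, issued by hand against a running nvmf target app, is just a handful of rpc.py calls (sketch only; socket/pid handling omitted), after which the spdk_nvme_perf invocation that follows can drive fabric I/O at it:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                      # returns "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420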
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:05.094 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.343 Initializing NVMe Controllers 00:08:17.343 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:17.343 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:17.343 Initialization complete. Launching workers. 00:08:17.343 ======================================================== 00:08:17.343 Latency(us) 00:08:17.343 Device Information : IOPS MiB/s Average min max 00:08:17.343 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26535.90 103.66 2411.68 617.36 12027.94 00:08:17.343 ======================================================== 00:08:17.343 Total : 26535.90 103.66 2411.68 617.36 12027.94 00:08:17.343 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:17.343 rmmod nvme_rdma 00:08:17.343 rmmod nvme_fabrics 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1497212 ']' 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1497212 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1497212 ']' 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1497212 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:17.343 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.344 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1497212 00:08:17.344 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:17.344 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:17.344 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1497212' 00:08:17.344 killing process with pid 1497212 00:08:17.344 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # kill 1497212 00:08:17.344 18:00:16 nvmf_rdma.nvmf_example -- common/autotest_common.sh@972 -- # wait 1497212 00:08:17.344 nvmf threads initialize successfully 00:08:17.344 bdev subsystem init successfully 00:08:17.344 created a nvmf target service 00:08:17.344 
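The trace above amounts to a five-step RPC bring-up of the example nvmf target followed by a timed initiator run. A minimal standalone sketch of the same sequence is shown below; it assumes SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock, whereas the harness issues the identical RPCs through its rpc_cmd wrapper, and it uses repository-relative paths in place of the Jenkins workspace paths.

  # Launch the example nvmf target on cores 0-3, as nvmfexamplestart '-m 0xF' does above
  ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
  # (the harness then blocks in waitforlisten until /var/tmp/spdk.sock answers)

  # Transport, backing bdev, subsystem, namespace, listener -- the exact RPCs seen in the trace
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MiB malloc bdev, 512 B blocks -> Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # 4 KiB random mixed I/O at queue depth 64 for 10 s (-M sets the read percentage), as in the perf line above
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'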
create targets's poll groups done 00:08:17.344 all subsystems of target started 00:08:17.344 nvmf target is running 00:08:17.344 all subsystems of target stopped 00:08:17.344 destroy targets's poll groups done 00:08:17.344 destroyed the nvmf target service 00:08:17.344 bdev subsystem finish successfully 00:08:17.344 nvmf threads destroy successfully 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:17.344 00:08:17.344 real 0m20.453s 00:08:17.344 user 0m52.327s 00:08:17.344 sys 0m6.230s 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.344 18:00:17 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:17.344 ************************************ 00:08:17.344 END TEST nvmf_example 00:08:17.344 ************************************ 00:08:17.344 18:00:17 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:17.344 18:00:17 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:17.344 18:00:17 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:17.344 18:00:17 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.344 18:00:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:17.344 ************************************ 00:08:17.344 START TEST nvmf_filesystem 00:08:17.344 ************************************ 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:17.344 * Looking for test storage... 
00:08:17.344 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:17.344 18:00:17 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:17.344 18:00:17 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:17.345 
18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:17.345 #define SPDK_CONFIG_H 00:08:17.345 #define SPDK_CONFIG_APPS 1 00:08:17.345 #define SPDK_CONFIG_ARCH native 00:08:17.345 #undef SPDK_CONFIG_ASAN 00:08:17.345 #undef SPDK_CONFIG_AVAHI 00:08:17.345 #undef SPDK_CONFIG_CET 00:08:17.345 #define SPDK_CONFIG_COVERAGE 1 00:08:17.345 #define SPDK_CONFIG_CROSS_PREFIX 00:08:17.345 #undef SPDK_CONFIG_CRYPTO 00:08:17.345 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:17.345 #undef SPDK_CONFIG_CUSTOMOCF 00:08:17.345 #undef SPDK_CONFIG_DAOS 00:08:17.345 #define SPDK_CONFIG_DAOS_DIR 00:08:17.345 #define SPDK_CONFIG_DEBUG 1 00:08:17.345 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:17.345 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:17.345 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:17.345 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:17.345 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:17.345 #undef SPDK_CONFIG_DPDK_UADK 00:08:17.345 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:17.345 #define SPDK_CONFIG_EXAMPLES 1 00:08:17.345 #undef SPDK_CONFIG_FC 00:08:17.345 #define SPDK_CONFIG_FC_PATH 00:08:17.345 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:17.345 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:17.345 #undef SPDK_CONFIG_FUSE 00:08:17.345 #undef SPDK_CONFIG_FUZZER 00:08:17.345 #define SPDK_CONFIG_FUZZER_LIB 00:08:17.345 #undef SPDK_CONFIG_GOLANG 00:08:17.345 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:17.345 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:17.345 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:17.345 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:17.345 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:17.345 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:17.345 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:17.345 #define SPDK_CONFIG_IDXD 1 00:08:17.345 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:17.345 #undef SPDK_CONFIG_IPSEC_MB 00:08:17.345 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:17.345 #define SPDK_CONFIG_ISAL 1 00:08:17.345 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:17.345 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:17.345 #define SPDK_CONFIG_LIBDIR 00:08:17.345 #undef SPDK_CONFIG_LTO 00:08:17.345 #define SPDK_CONFIG_MAX_LCORES 128 00:08:17.345 #define SPDK_CONFIG_NVME_CUSE 1 00:08:17.345 #undef SPDK_CONFIG_OCF 00:08:17.345 #define 
SPDK_CONFIG_OCF_PATH 00:08:17.345 #define SPDK_CONFIG_OPENSSL_PATH 00:08:17.345 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:17.345 #define SPDK_CONFIG_PGO_DIR 00:08:17.345 #undef SPDK_CONFIG_PGO_USE 00:08:17.345 #define SPDK_CONFIG_PREFIX /usr/local 00:08:17.345 #undef SPDK_CONFIG_RAID5F 00:08:17.345 #undef SPDK_CONFIG_RBD 00:08:17.345 #define SPDK_CONFIG_RDMA 1 00:08:17.345 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:17.345 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:17.345 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:17.345 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:17.345 #define SPDK_CONFIG_SHARED 1 00:08:17.345 #undef SPDK_CONFIG_SMA 00:08:17.345 #define SPDK_CONFIG_TESTS 1 00:08:17.345 #undef SPDK_CONFIG_TSAN 00:08:17.345 #define SPDK_CONFIG_UBLK 1 00:08:17.345 #define SPDK_CONFIG_UBSAN 1 00:08:17.345 #undef SPDK_CONFIG_UNIT_TESTS 00:08:17.345 #undef SPDK_CONFIG_URING 00:08:17.345 #define SPDK_CONFIG_URING_PATH 00:08:17.345 #undef SPDK_CONFIG_URING_ZNS 00:08:17.345 #undef SPDK_CONFIG_USDT 00:08:17.345 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:17.345 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:17.345 #undef SPDK_CONFIG_VFIO_USER 00:08:17.345 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:17.345 #define SPDK_CONFIG_VHOST 1 00:08:17.345 #define SPDK_CONFIG_VIRTIO 1 00:08:17.345 #undef SPDK_CONFIG_VTUNE 00:08:17.345 #define SPDK_CONFIG_VTUNE_DIR 00:08:17.345 #define SPDK_CONFIG_WERROR 1 00:08:17.345 #define SPDK_CONFIG_WPDK_DIR 00:08:17.345 #undef SPDK_CONFIG_XNVME 00:08:17.345 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:17.345 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:17.346 18:00:17 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@158 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.346 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1499851 ]] 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1499851 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.VVNF4r 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VVNF4r/tests/target /tmp/spdk.VVNF4r 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=951066624 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4333363200 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=51051274240 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742268416 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10690994176 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30815498240 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871134208 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12338700288 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348456960 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9756672 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30869917696 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871134208 
00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1216512 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174220288 00:08:17.347 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174224384 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:17.348 * Looking for test storage... 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=51051274240 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12905586688 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.348 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 
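The df/mount loop traced above is set_test_storage picking scratch space for this test: it reads df -T, lands on the root overlay mount, checks that the roughly 2 GiB request fits in the available space, and confirms the filesystem would stay under 95% full afterwards. A stripped-down sketch of that check, with the numbers taken straight from the trace (size 61742268416, used 10690994176, avail 51051274240 on /), would be:

  # Simplified storage probe; the real helper also tries the /tmp/spdk.XXXXXX fallback
  # and the tmpfs/ramfs special cases visible in the trace.
  testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  requested_size=2214592512                                   # ~2 GiB plus overhead, as traced
  read -r size used avail mount < <(df -B1 --output=size,used,avail,target "$testdir" | tail -n 1)
  if (( avail >= requested_size )); then
      new_size=$(( used + requested_size ))                   # 10690994176 + 2214592512 = 12905586688
      if (( new_size * 100 / size <= 95 )); then              # stays under 95% of the filesystem
          export SPDK_TEST_STORAGE=$testdir
      fi
  fi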
00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:17.348 18:00:17 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:17.349 18:00:17 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
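The e810/x722/mlx arrays being filled here and just below are vendor:device ID tables that the harness matches against the PCI bus. A quick manual cross-check of the same thing (IDs taken from the trace; lspci output formatting varies by distro):

    # list the Mellanox functions the harness is about to find (vendor 0x15b3, device 0x1015)
    lspci -nn -d 15b3:1015        # -> 0000:d9:00.0 and 0000:d9:00.1 in this run
    lspci -nn -d 15b3:            # any Mellanox device, for a broader check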
00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:25.467 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:25.467 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:25.467 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:25.467 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.467 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:25.468 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:25.468 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:25.468 altname enp217s0f0np0 00:08:25.468 altname ens818f0np0 00:08:25.468 inet 192.168.100.8/24 scope global mlx_0_0 00:08:25.468 valid_lft forever preferred_lft forever 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:08:25.468 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:25.468 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:25.468 altname enp217s0f1np1 00:08:25.468 altname ens818f1np1 00:08:25.468 inet 192.168.100.9/24 scope global mlx_0_1 00:08:25.468 valid_lft forever preferred_lft forever 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 
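Outside the harness, the rdma_device_init and allocate_nic_ips steps traced above reduce to loading the RDMA kernel stack and reading each port's IPv4 address. A minimal sketch using the same module list and the same ip/awk/cut pipeline as the trace:

    # RDMA kernel modules loaded by rdma_device_init in the trace
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # same pipeline as nvmf/common.sh@113 above
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    get_ip_address mlx_0_0    # 192.168.100.8 in this run
    get_ip_address mlx_0_1    # 192.168.100.9 in this run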
00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:25.468 192.168.100.9' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:25.468 192.168.100.9' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:25.468 192.168.100.9' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 ************************************ 00:08:25.468 START TEST nvmf_filesystem_no_in_capsule 00:08:25.468 ************************************ 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1503768 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1503768 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@829 -- # '[' -z 1503768 ']' 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.468 18:00:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:25.468 [2024-07-15 18:00:25.744741] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:08:25.468 [2024-07-15 18:00:25.744790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.468 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.468 [2024-07-15 18:00:25.829484] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.727 [2024-07-15 18:00:25.906298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.727 [2024-07-15 18:00:25.906340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.727 [2024-07-15 18:00:25.906351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.727 [2024-07-15 18:00:25.906359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.727 [2024-07-15 18:00:25.906382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
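nvmfappstart, traced above, boils down to launching nvmf_tgt with the configured core mask and waiting for its RPC socket. A rough stand-alone equivalent (the polling loop is a crude stand-in for waitforlisten, which actually probes the RPC socket through rpc.py):

    # from the SPDK checkout used in this run
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude stand-in for waitforlisten: wait for the RPC socket to appear
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done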
00:08:25.727 [2024-07-15 18:00:25.906428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.727 [2024-07-15 18:00:25.906520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.727 [2024-07-15 18:00:25.906607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.727 [2024-07-15 18:00:25.906608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.296 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.296 [2024-07-15 18:00:26.609799] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:26.296 [2024-07-15 18:00:26.631578] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x88ef80/0x893470) succeed. 00:08:26.296 [2024-07-15 18:00:26.640806] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8905c0/0x8d4b00) succeed. 
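With the app up, the RDMA transport has just been created over RPC, and the records that follow add a malloc bdev, a subsystem, a namespace, and a listener. rpc_cmd in the trace wraps scripts/rpc.py, so the flags should carry over unchanged; issued by hand, the target-side provisioning traced here is:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then attaches with the nvme connect command shown verbatim further down in the trace.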
00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 Malloc1 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.555 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.556 [2024-07-15 18:00:26.881255] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.556 18:00:26 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:26.556 { 00:08:26.556 "name": "Malloc1", 00:08:26.556 "aliases": [ 00:08:26.556 "eac783ae-5cee-4a84-b9d0-c0a2f834e45c" 00:08:26.556 ], 00:08:26.556 "product_name": "Malloc disk", 00:08:26.556 "block_size": 512, 00:08:26.556 "num_blocks": 1048576, 00:08:26.556 "uuid": "eac783ae-5cee-4a84-b9d0-c0a2f834e45c", 00:08:26.556 "assigned_rate_limits": { 00:08:26.556 "rw_ios_per_sec": 0, 00:08:26.556 "rw_mbytes_per_sec": 0, 00:08:26.556 "r_mbytes_per_sec": 0, 00:08:26.556 "w_mbytes_per_sec": 0 00:08:26.556 }, 00:08:26.556 "claimed": true, 00:08:26.556 "claim_type": "exclusive_write", 00:08:26.556 "zoned": false, 00:08:26.556 "supported_io_types": { 00:08:26.556 "read": true, 00:08:26.556 "write": true, 00:08:26.556 "unmap": true, 00:08:26.556 "flush": true, 00:08:26.556 "reset": true, 00:08:26.556 "nvme_admin": false, 00:08:26.556 "nvme_io": false, 00:08:26.556 "nvme_io_md": false, 00:08:26.556 "write_zeroes": true, 00:08:26.556 "zcopy": true, 00:08:26.556 "get_zone_info": false, 00:08:26.556 "zone_management": false, 00:08:26.556 "zone_append": false, 00:08:26.556 "compare": false, 00:08:26.556 "compare_and_write": false, 00:08:26.556 "abort": true, 00:08:26.556 "seek_hole": false, 00:08:26.556 "seek_data": false, 00:08:26.556 "copy": true, 00:08:26.556 "nvme_iov_md": false 00:08:26.556 }, 00:08:26.556 "memory_domains": [ 00:08:26.556 { 00:08:26.556 "dma_device_id": "system", 00:08:26.556 "dma_device_type": 1 00:08:26.556 }, 00:08:26.556 { 00:08:26.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.556 "dma_device_type": 2 00:08:26.556 } 00:08:26.556 ], 00:08:26.556 "driver_specific": {} 00:08:26.556 } 00:08:26.556 ]' 00:08:26.556 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:26.817 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:26.817 18:00:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:26.817 18:00:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:26.817 18:00:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:26.817 18:00:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:26.817 18:00:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:26.817 18:00:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:27.804 18:00:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.804 18:00:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:27.804 18:00:28 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.804 18:00:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:27.804 18:00:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:29.706 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:29.964 18:00:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.342 18:00:31 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.342 ************************************ 00:08:31.342 START TEST filesystem_ext4 00:08:31.342 ************************************ 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:31.342 mke2fs 1.46.5 (30-Dec-2021) 00:08:31.342 Discarding device blocks: 0/522240 done 00:08:31.342 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:31.342 Filesystem UUID: 7695925d-a1a1-46e0-8a28-9f7e60c25aef 00:08:31.342 Superblock backups stored on blocks: 00:08:31.342 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:31.342 00:08:31.342 Allocating group tables: 0/64 done 00:08:31.342 Writing inode tables: 0/64 done 00:08:31.342 Creating journal (8192 blocks): done 00:08:31.342 Writing superblocks and filesystem accounting information: 0/64 done 00:08:31.342 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:31.342 18:00:31 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1503768 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.342 00:08:31.342 real 0m0.189s 00:08:31.342 user 0m0.031s 00:08:31.342 sys 0m0.073s 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:31.342 ************************************ 00:08:31.342 END TEST filesystem_ext4 00:08:31.342 ************************************ 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.342 ************************************ 00:08:31.342 START TEST filesystem_btrfs 00:08:31.342 ************************************ 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = 
ext4 ']' 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:31.342 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:31.634 btrfs-progs v6.6.2 00:08:31.634 See https://btrfs.readthedocs.io for more information. 00:08:31.634 00:08:31.634 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:31.634 NOTE: several default settings have changed in version 5.15, please make sure 00:08:31.634 this does not affect your deployments: 00:08:31.634 - DUP for metadata (-m dup) 00:08:31.634 - enabled no-holes (-O no-holes) 00:08:31.634 - enabled free-space-tree (-R free-space-tree) 00:08:31.634 00:08:31.634 Label: (null) 00:08:31.634 UUID: 6230259c-8385-4d21-8f37-74f95b13af1b 00:08:31.634 Node size: 16384 00:08:31.634 Sector size: 4096 00:08:31.634 Filesystem size: 510.00MiB 00:08:31.634 Block group profiles: 00:08:31.634 Data: single 8.00MiB 00:08:31.634 Metadata: DUP 32.00MiB 00:08:31.634 System: DUP 8.00MiB 00:08:31.634 SSD detected: yes 00:08:31.634 Zoned device: no 00:08:31.634 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:31.634 Runtime features: free-space-tree 00:08:31.634 Checksum: crc32c 00:08:31.634 Number of devices: 1 00:08:31.634 Devices: 00:08:31.634 ID SIZE PATH 00:08:31.634 1 510.00MiB /dev/nvme0n1p1 00:08:31.634 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1503768 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.634 00:08:31.634 real 0m0.255s 00:08:31.634 user 0m0.019s 00:08:31.634 sys 0m0.141s 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 ************************************ 00:08:31.634 END TEST filesystem_btrfs 00:08:31.634 ************************************ 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 ************************************ 00:08:31.634 START TEST filesystem_xfs 00:08:31.634 ************************************ 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:31.634 18:00:31 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:31.892 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:31.892 = sectsz=512 attr=2, projid32bit=1 00:08:31.892 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:31.892 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:31.892 data = bsize=4096 blocks=130560, imaxpct=25 00:08:31.893 = sunit=0 swidth=0 blks 00:08:31.893 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:31.893 log =internal log bsize=4096 blocks=16384, version=2 00:08:31.893 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:31.893 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:31.893 Discarding blocks...Done. 
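Each filesystem pass (ext4 and btrfs above, xfs whose check runs next) exercises the same small mount/verify cycle against the partition exported over NVMe-oF. Stripped of the harness plumbing, one pass is just:

    # one filesystem pass, as traced for ext4/btrfs/xfs (device and mount point from the trace)
    mkfs.xfs -f /dev/nvme0n1p1          # mkfs.ext4 -F / mkfs.btrfs -f for the other passes
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync       # prove the filesystem accepts writes over the fabric
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                  # target process (pid 1503768 here) must survive the I/O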
00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1503768 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.893 00:08:31.893 real 0m0.206s 00:08:31.893 user 0m0.023s 00:08:31.893 sys 0m0.087s 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:31.893 ************************************ 00:08:31.893 END TEST filesystem_xfs 00:08:31.893 ************************************ 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:31.893 18:00:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:32.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:32.827 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:32.827 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:32.827 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:32.827 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
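At this point nvme disconnect has been issued for nqn.2016-06.io.spdk:cnode1, and the helper traced here (waitforserial_disconnect) polls lsblk until the SPDKISFASTANDAWESOME serial disappears before the subsystem is deleted and the target process is killed. In plain shell the teardown amounts to roughly the following sketch; the retry bound and sleep are illustrative, the real helper in autotest_common.sh keeps its own counter:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  i=0
  while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    (( ++i > 15 )) && break                             # illustrative bound, not the helper's
    sleep 1
  done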
00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1503768 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1503768 ']' 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1503768 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1503768 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1503768' 00:08:33.087 killing process with pid 1503768 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1503768 00:08:33.087 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1503768 00:08:33.346 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.346 00:08:33.346 real 0m8.028s 00:08:33.346 user 0m31.229s 00:08:33.346 sys 0m1.276s 00:08:33.346 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.346 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.346 ************************************ 00:08:33.346 END TEST nvmf_filesystem_no_in_capsule 00:08:33.346 ************************************ 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:33.604 18:00:33 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.604 ************************************ 00:08:33.604 START TEST nvmf_filesystem_in_capsule 00:08:33.604 ************************************ 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1505430 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1505430 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1505430 ']' 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.604 18:00:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.604 [2024-07-15 18:00:33.857229] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:08:33.604 [2024-07-15 18:00:33.857279] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.604 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.604 [2024-07-15 18:00:33.940269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.863 [2024-07-15 18:00:34.014907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.863 [2024-07-15 18:00:34.014947] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:33.863 [2024-07-15 18:00:34.014957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.863 [2024-07-15 18:00:34.014966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.863 [2024-07-15 18:00:34.014989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.863 [2024-07-15 18:00:34.015038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.863 [2024-07-15 18:00:34.015090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.863 [2024-07-15 18:00:34.015176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.863 [2024-07-15 18:00:34.015177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.431 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.431 [2024-07-15 18:00:34.743035] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x181ff80/0x1824470) succeed. 00:08:34.431 [2024-07-15 18:00:34.752333] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18215c0/0x1865b00) succeed. 
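The RDMA transport has just come up with -c 4096, i.e. the 4096-byte in-capsule data size this test variant is named after. The rpc_cmd calls traced next stand up the rest of the target; issued outside the test harness against the same target, the equivalent scripts/rpc.py sequence would look roughly like this (the flags and the listener address 192.168.100.8:4420 are the ones used by this run):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

rpc_cmd is simply the harness wrapper that sends the same RPCs to the target's /var/tmp/spdk.sock socket shown earlier in the trace.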
00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.690 Malloc1 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.690 18:00:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.690 [2024-07-15 18:00:35.016388] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:34.690 
18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:34.690 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:34.690 { 00:08:34.690 "name": "Malloc1", 00:08:34.690 "aliases": [ 00:08:34.690 "cdec1c0f-4356-47ed-bed2-1189e4a54107" 00:08:34.690 ], 00:08:34.690 "product_name": "Malloc disk", 00:08:34.690 "block_size": 512, 00:08:34.690 "num_blocks": 1048576, 00:08:34.690 "uuid": "cdec1c0f-4356-47ed-bed2-1189e4a54107", 00:08:34.690 "assigned_rate_limits": { 00:08:34.690 "rw_ios_per_sec": 0, 00:08:34.690 "rw_mbytes_per_sec": 0, 00:08:34.690 "r_mbytes_per_sec": 0, 00:08:34.690 "w_mbytes_per_sec": 0 00:08:34.690 }, 00:08:34.690 "claimed": true, 00:08:34.690 "claim_type": "exclusive_write", 00:08:34.690 "zoned": false, 00:08:34.690 "supported_io_types": { 00:08:34.690 "read": true, 00:08:34.690 "write": true, 00:08:34.690 "unmap": true, 00:08:34.690 "flush": true, 00:08:34.690 "reset": true, 00:08:34.690 "nvme_admin": false, 00:08:34.690 "nvme_io": false, 00:08:34.690 "nvme_io_md": false, 00:08:34.690 "write_zeroes": true, 00:08:34.690 "zcopy": true, 00:08:34.690 "get_zone_info": false, 00:08:34.690 "zone_management": false, 00:08:34.690 "zone_append": false, 00:08:34.690 "compare": false, 00:08:34.690 "compare_and_write": false, 00:08:34.690 "abort": true, 00:08:34.690 "seek_hole": false, 00:08:34.690 "seek_data": false, 00:08:34.690 "copy": true, 00:08:34.691 "nvme_iov_md": false 00:08:34.691 }, 00:08:34.691 "memory_domains": [ 00:08:34.691 { 00:08:34.691 "dma_device_id": "system", 00:08:34.691 "dma_device_type": 1 00:08:34.691 }, 00:08:34.691 { 00:08:34.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.691 "dma_device_type": 2 00:08:34.691 } 00:08:34.691 ], 00:08:34.691 "driver_specific": {} 00:08:34.691 } 00:08:34.691 ]' 00:08:34.691 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:34.949 18:00:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:35.893 18:00:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.893 18:00:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:35.893 18:00:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.893 18:00:36 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:35.893 18:00:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:37.818 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:37.818 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:37.818 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:37.819 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:38.076 18:00:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:39.453 ************************************ 00:08:39.453 START TEST filesystem_in_capsule_ext4 00:08:39.453 
************************************ 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:39.453 mke2fs 1.46.5 (30-Dec-2021) 00:08:39.453 Discarding device blocks: 0/522240 done 00:08:39.453 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:39.453 Filesystem UUID: b80bad72-4a91-4033-ac7a-dc3b054e78c2 00:08:39.453 Superblock backups stored on blocks: 00:08:39.453 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:39.453 00:08:39.453 Allocating group tables: 0/64 done 00:08:39.453 Writing inode tables: 0/64 done 00:08:39.453 Creating journal (8192 blocks): done 00:08:39.453 Writing superblocks and filesystem accounting information: 0/64 done 00:08:39.453 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1505430 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.453 00:08:39.453 real 0m0.188s 00:08:39.453 user 0m0.035s 00:08:39.453 sys 0m0.069s 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:39.453 ************************************ 00:08:39.453 END TEST filesystem_in_capsule_ext4 00:08:39.453 ************************************ 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:39.453 ************************************ 00:08:39.453 START TEST filesystem_in_capsule_btrfs 00:08:39.453 ************************************ 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:39.453 18:00:39 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:39.453 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:39.712 btrfs-progs v6.6.2 00:08:39.712 See https://btrfs.readthedocs.io for more information. 00:08:39.712 00:08:39.712 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:39.712 NOTE: several default settings have changed in version 5.15, please make sure 00:08:39.712 this does not affect your deployments: 00:08:39.712 - DUP for metadata (-m dup) 00:08:39.712 - enabled no-holes (-O no-holes) 00:08:39.712 - enabled free-space-tree (-R free-space-tree) 00:08:39.712 00:08:39.712 Label: (null) 00:08:39.712 UUID: 235d7bb7-3bc4-4f70-b37d-d5a9ee669ef4 00:08:39.712 Node size: 16384 00:08:39.712 Sector size: 4096 00:08:39.712 Filesystem size: 510.00MiB 00:08:39.712 Block group profiles: 00:08:39.712 Data: single 8.00MiB 00:08:39.712 Metadata: DUP 32.00MiB 00:08:39.712 System: DUP 8.00MiB 00:08:39.712 SSD detected: yes 00:08:39.712 Zoned device: no 00:08:39.712 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:39.712 Runtime features: free-space-tree 00:08:39.712 Checksum: crc32c 00:08:39.712 Number of devices: 1 00:08:39.712 Devices: 00:08:39.712 ID SIZE PATH 00:08:39.712 1 510.00MiB /dev/nvme0n1p1 00:08:39.712 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:39.712 18:00:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1505430 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.712 00:08:39.712 real 0m0.263s 00:08:39.712 user 0m0.023s 00:08:39.712 sys 0m0.148s 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:39.712 ************************************ 00:08:39.712 END TEST filesystem_in_capsule_btrfs 00:08:39.712 ************************************ 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.712 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:39.971 ************************************ 00:08:39.971 START TEST filesystem_in_capsule_xfs 00:08:39.971 ************************************ 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:39.971 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:39.971 = sectsz=512 attr=2, projid32bit=1 00:08:39.971 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:39.971 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:39.971 data = bsize=4096 blocks=130560, imaxpct=25 00:08:39.971 = sunit=0 swidth=0 blks 00:08:39.971 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:08:39.971 log =internal log bsize=4096 blocks=16384, version=2 00:08:39.971 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:39.971 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:39.971 Discarding blocks...Done. 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1505430 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.971 00:08:39.971 real 0m0.208s 00:08:39.971 user 0m0.027s 00:08:39.971 sys 0m0.082s 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.971 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:39.971 ************************************ 00:08:39.971 END TEST filesystem_in_capsule_xfs 00:08:39.971 ************************************ 00:08:40.229 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:40.229 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:40.229 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:40.229 18:00:40 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1505430 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1505430 ']' 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1505430 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505430 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505430' 00:08:41.178 killing process with pid 1505430 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1505430 00:08:41.178 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1505430 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:41.747 00:08:41.747 real 0m8.097s 00:08:41.747 user 0m31.483s 00:08:41.747 sys 0m1.317s 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:41.747 ************************************ 00:08:41.747 END TEST nvmf_filesystem_in_capsule 00:08:41.747 ************************************ 00:08:41.747 18:00:41 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:41.747 rmmod nvme_rdma 00:08:41.747 rmmod nvme_fabrics 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:41.747 00:08:41.747 real 0m24.817s 00:08:41.747 user 1m5.257s 00:08:41.747 sys 0m8.984s 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.747 18:00:41 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:41.747 ************************************ 00:08:41.747 END TEST nvmf_filesystem 00:08:41.747 ************************************ 00:08:41.747 18:00:42 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:41.747 18:00:42 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:41.747 18:00:42 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.747 18:00:42 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.747 18:00:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:41.747 ************************************ 00:08:41.747 START TEST nvmf_target_discovery 00:08:41.747 ************************************ 00:08:41.747 18:00:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:42.006 * Looking for test storage... 
00:08:42.006 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.006 18:00:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.007 18:00:42 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:50.154 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:50.154 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.154 18:00:49 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:50.154 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:50.154 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:50.154 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:50.155 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.155 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:50.155 altname enp217s0f0np0 00:08:50.155 altname ens818f0np0 00:08:50.155 inet 192.168.100.8/24 scope global mlx_0_0 00:08:50.155 valid_lft forever preferred_lft forever 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:50.155 18:00:49 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:50.155 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:50.155 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:50.155 altname enp217s0f1np1 00:08:50.155 altname ens818f1np1 00:08:50.155 inet 192.168.100.9/24 scope global mlx_0_1 00:08:50.155 valid_lft forever preferred_lft forever 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:50.155 192.168.100.9' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:50.155 192.168.100.9' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:50.155 192.168.100.9' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1510899 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1510899 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1510899 ']' 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
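The xtrace above shows how nvmf/common.sh derives the two RDMA target addresses: each Mellanox netdev returned by get_rdma_if_list is queried with "ip -o -4 addr show", the ADDR/PREFIX field is split on '/', and the first and second results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that pipeline, assuming the mlx_0_0 / mlx_0_1 interface names seen in this run:

#!/usr/bin/env bash
# Sketch of the address resolution done by allocate_nic_ips/get_available_rdma_ips
# in nvmf/common.sh; the interface names are assumptions taken from this log.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per IPv4 address; field 4 is "ADDR/PREFIX".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ip_list="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"

NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)                # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"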
00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.155 18:00:49 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.155 [2024-07-15 18:00:49.648662] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:08:50.155 [2024-07-15 18:00:49.648709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.155 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.155 [2024-07-15 18:00:49.730460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.155 [2024-07-15 18:00:49.807595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.155 [2024-07-15 18:00:49.807633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.155 [2024-07-15 18:00:49.807642] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.155 [2024-07-15 18:00:49.807650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.155 [2024-07-15 18:00:49.807657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.155 [2024-07-15 18:00:49.807731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.155 [2024-07-15 18:00:49.811027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.155 [2024-07-15 18:00:49.811049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.155 [2024-07-15 18:00:49.811051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.155 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.155 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.156 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.156 [2024-07-15 18:00:50.532512] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a7f80/0x21ac470) succeed. 00:08:50.156 [2024-07-15 18:00:50.542247] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21a95c0/0x21edb00) succeed. 
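Before the discovery probe below, the trace records the provisioning sequence that target/discovery.sh drives over the RPC socket: nvmf_tgt is started with core mask 0xF (reactors on cores 0-3), an RDMA transport is created, four null-bdev subsystems are built, and a discovery listener plus a port-4430 referral are added. Re-expressed as standalone calls (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; the paths, NQNs and the 192.168.100.8 listener address are taken from this run), a minimal sketch looks like:

#!/usr/bin/env bash
# Sketch only - mirrors the rpc_cmd calls visible in the trace, not a verbatim
# copy of target/discovery.sh. Run from an SPDK source tree after a build.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the harness waits for /var/tmp/spdk.sock before issuing RPCs)

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

# Discovery listener plus the port-4430 referral, then the probe that produced the
# six-entry discovery log shown further down (the harness additionally passes
# --hostnqn/--hostid to nvme discover).
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
nvme discover -t rdma -a 192.168.100.8 -s 4420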
00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 Null1 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 [2024-07-15 18:00:50.710383] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 Null2 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:50.415 18:00:50 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 Null3 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 Null4 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.415 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:50.673 00:08:50.673 Discovery Log Number of Records 6, Generation counter 6 00:08:50.673 =====Discovery Log Entry 0====== 00:08:50.673 trtype: rdma 00:08:50.673 adrfam: ipv4 00:08:50.673 subtype: current discovery subsystem 00:08:50.673 treq: not required 00:08:50.673 portid: 0 00:08:50.673 trsvcid: 4420 00:08:50.673 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.673 traddr: 192.168.100.8 00:08:50.673 eflags: explicit discovery connections, duplicate discovery information 00:08:50.673 rdma_prtype: not specified 00:08:50.673 rdma_qptype: connected 00:08:50.673 rdma_cms: rdma-cm 00:08:50.673 rdma_pkey: 0x0000 00:08:50.673 =====Discovery Log Entry 1====== 00:08:50.673 trtype: rdma 00:08:50.673 adrfam: ipv4 00:08:50.673 subtype: nvme subsystem 00:08:50.673 treq: not required 00:08:50.673 portid: 0 00:08:50.673 trsvcid: 4420 00:08:50.673 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:50.673 traddr: 192.168.100.8 00:08:50.673 eflags: none 00:08:50.673 rdma_prtype: not specified 00:08:50.673 rdma_qptype: connected 00:08:50.673 rdma_cms: rdma-cm 00:08:50.673 rdma_pkey: 0x0000 00:08:50.673 =====Discovery Log Entry 2====== 00:08:50.673 
trtype: rdma 00:08:50.673 adrfam: ipv4 00:08:50.673 subtype: nvme subsystem 00:08:50.673 treq: not required 00:08:50.673 portid: 0 00:08:50.673 trsvcid: 4420 00:08:50.673 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:50.673 traddr: 192.168.100.8 00:08:50.673 eflags: none 00:08:50.673 rdma_prtype: not specified 00:08:50.673 rdma_qptype: connected 00:08:50.673 rdma_cms: rdma-cm 00:08:50.673 rdma_pkey: 0x0000 00:08:50.673 =====Discovery Log Entry 3====== 00:08:50.673 trtype: rdma 00:08:50.673 adrfam: ipv4 00:08:50.673 subtype: nvme subsystem 00:08:50.673 treq: not required 00:08:50.673 portid: 0 00:08:50.673 trsvcid: 4420 00:08:50.673 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:50.673 traddr: 192.168.100.8 00:08:50.673 eflags: none 00:08:50.673 rdma_prtype: not specified 00:08:50.673 rdma_qptype: connected 00:08:50.673 rdma_cms: rdma-cm 00:08:50.673 rdma_pkey: 0x0000 00:08:50.673 =====Discovery Log Entry 4====== 00:08:50.673 trtype: rdma 00:08:50.673 adrfam: ipv4 00:08:50.673 subtype: nvme subsystem 00:08:50.673 treq: not required 00:08:50.673 portid: 0 00:08:50.673 trsvcid: 4420 00:08:50.673 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:50.673 traddr: 192.168.100.8 00:08:50.673 eflags: none 00:08:50.673 rdma_prtype: not specified 00:08:50.673 rdma_qptype: connected 00:08:50.673 rdma_cms: rdma-cm 00:08:50.673 rdma_pkey: 0x0000 00:08:50.673 =====Discovery Log Entry 5====== 00:08:50.673 trtype: rdma 00:08:50.673 adrfam: ipv4 00:08:50.673 subtype: discovery subsystem referral 00:08:50.673 treq: not required 00:08:50.673 portid: 0 00:08:50.673 trsvcid: 4430 00:08:50.673 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.673 traddr: 192.168.100.8 00:08:50.673 eflags: none 00:08:50.673 rdma_prtype: unrecognized 00:08:50.673 rdma_qptype: unrecognized 00:08:50.673 rdma_cms: unrecognized 00:08:50.673 rdma_pkey: 0x0000 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:50.673 Perform nvmf subsystem discovery via RPC 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.673 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.673 [ 00:08:50.673 { 00:08:50.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:50.673 "subtype": "Discovery", 00:08:50.673 "listen_addresses": [ 00:08:50.673 { 00:08:50.673 "trtype": "RDMA", 00:08:50.673 "adrfam": "IPv4", 00:08:50.673 "traddr": "192.168.100.8", 00:08:50.673 "trsvcid": "4420" 00:08:50.673 } 00:08:50.673 ], 00:08:50.673 "allow_any_host": true, 00:08:50.673 "hosts": [] 00:08:50.673 }, 00:08:50.673 { 00:08:50.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.673 "subtype": "NVMe", 00:08:50.673 "listen_addresses": [ 00:08:50.673 { 00:08:50.673 "trtype": "RDMA", 00:08:50.673 "adrfam": "IPv4", 00:08:50.673 "traddr": "192.168.100.8", 00:08:50.673 "trsvcid": "4420" 00:08:50.673 } 00:08:50.673 ], 00:08:50.673 "allow_any_host": true, 00:08:50.673 "hosts": [], 00:08:50.673 "serial_number": "SPDK00000000000001", 00:08:50.673 "model_number": "SPDK bdev Controller", 00:08:50.673 "max_namespaces": 32, 00:08:50.673 "min_cntlid": 1, 00:08:50.673 "max_cntlid": 65519, 00:08:50.673 "namespaces": [ 00:08:50.673 { 00:08:50.673 "nsid": 1, 00:08:50.673 "bdev_name": "Null1", 00:08:50.673 "name": "Null1", 00:08:50.673 "nguid": "5F4762CD737B4732BA10AC8C03CDEEB8", 00:08:50.673 "uuid": 
"5f4762cd-737b-4732-ba10-ac8c03cdeeb8" 00:08:50.673 } 00:08:50.673 ] 00:08:50.673 }, 00:08:50.673 { 00:08:50.673 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.673 "subtype": "NVMe", 00:08:50.673 "listen_addresses": [ 00:08:50.673 { 00:08:50.673 "trtype": "RDMA", 00:08:50.673 "adrfam": "IPv4", 00:08:50.673 "traddr": "192.168.100.8", 00:08:50.673 "trsvcid": "4420" 00:08:50.673 } 00:08:50.673 ], 00:08:50.673 "allow_any_host": true, 00:08:50.673 "hosts": [], 00:08:50.673 "serial_number": "SPDK00000000000002", 00:08:50.673 "model_number": "SPDK bdev Controller", 00:08:50.673 "max_namespaces": 32, 00:08:50.673 "min_cntlid": 1, 00:08:50.673 "max_cntlid": 65519, 00:08:50.673 "namespaces": [ 00:08:50.673 { 00:08:50.673 "nsid": 1, 00:08:50.673 "bdev_name": "Null2", 00:08:50.673 "name": "Null2", 00:08:50.673 "nguid": "F1D49504A2C14AA9953BFF7424F28767", 00:08:50.673 "uuid": "f1d49504-a2c1-4aa9-953b-ff7424f28767" 00:08:50.673 } 00:08:50.673 ] 00:08:50.673 }, 00:08:50.673 { 00:08:50.673 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:50.673 "subtype": "NVMe", 00:08:50.673 "listen_addresses": [ 00:08:50.673 { 00:08:50.673 "trtype": "RDMA", 00:08:50.673 "adrfam": "IPv4", 00:08:50.673 "traddr": "192.168.100.8", 00:08:50.673 "trsvcid": "4420" 00:08:50.673 } 00:08:50.673 ], 00:08:50.673 "allow_any_host": true, 00:08:50.673 "hosts": [], 00:08:50.673 "serial_number": "SPDK00000000000003", 00:08:50.673 "model_number": "SPDK bdev Controller", 00:08:50.673 "max_namespaces": 32, 00:08:50.673 "min_cntlid": 1, 00:08:50.673 "max_cntlid": 65519, 00:08:50.673 "namespaces": [ 00:08:50.673 { 00:08:50.674 "nsid": 1, 00:08:50.674 "bdev_name": "Null3", 00:08:50.674 "name": "Null3", 00:08:50.674 "nguid": "BFEE72EABD134AF998037B58F0739142", 00:08:50.674 "uuid": "bfee72ea-bd13-4af9-9803-7b58f0739142" 00:08:50.674 } 00:08:50.674 ] 00:08:50.674 }, 00:08:50.674 { 00:08:50.674 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:50.674 "subtype": "NVMe", 00:08:50.674 "listen_addresses": [ 00:08:50.674 { 00:08:50.674 "trtype": "RDMA", 00:08:50.674 "adrfam": "IPv4", 00:08:50.674 "traddr": "192.168.100.8", 00:08:50.674 "trsvcid": "4420" 00:08:50.674 } 00:08:50.674 ], 00:08:50.674 "allow_any_host": true, 00:08:50.674 "hosts": [], 00:08:50.674 "serial_number": "SPDK00000000000004", 00:08:50.674 "model_number": "SPDK bdev Controller", 00:08:50.674 "max_namespaces": 32, 00:08:50.674 "min_cntlid": 1, 00:08:50.674 "max_cntlid": 65519, 00:08:50.674 "namespaces": [ 00:08:50.674 { 00:08:50.674 "nsid": 1, 00:08:50.674 "bdev_name": "Null4", 00:08:50.674 "name": "Null4", 00:08:50.674 "nguid": "A5A2DC728C11497EB0EA9630F9F009E6", 00:08:50.674 "uuid": "a5a2dc72-8c11-497e-b0ea-9630f9f009e6" 00:08:50.674 } 00:08:50.674 ] 00:08:50.674 } 00:08:50.674 ] 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:50 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:50.674 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:50.932 rmmod nvme_rdma 00:08:50.932 rmmod nvme_fabrics 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1510899 ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1510899 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1510899 ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1510899 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510899 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510899' 00:08:50.932 killing process with pid 1510899 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1510899 00:08:50.932 18:00:51 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@972 -- # wait 1510899 00:08:51.190 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.190 18:00:51 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:51.190 00:08:51.190 real 0m9.368s 00:08:51.190 user 0m8.426s 00:08:51.190 sys 0m6.069s 00:08:51.190 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.190 18:00:51 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:51.190 ************************************ 00:08:51.190 END TEST nvmf_target_discovery 00:08:51.190 ************************************ 00:08:51.190 18:00:51 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:51.190 18:00:51 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:51.190 18:00:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.190 18:00:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.190 18:00:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:51.190 ************************************ 00:08:51.190 START TEST nvmf_referrals 00:08:51.190 ************************************ 00:08:51.190 18:00:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:51.449 * Looking for test storage... 00:08:51.449 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:51.449 18:00:51 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.577 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.577 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:59.577 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:59.578 18:00:58 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:59.578 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:59.578 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:59.578 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:59.578 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:59.578 18:00:58 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:59.578 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.578 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:59.578 altname enp217s0f0np0 00:08:59.578 altname ens818f0np0 00:08:59.578 inet 192.168.100.8/24 scope global mlx_0_0 00:08:59.578 valid_lft forever preferred_lft forever 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:59.578 
18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:59.578 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.578 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:59.578 altname enp217s0f1np1 00:08:59.578 altname ens818f1np1 00:08:59.578 inet 192.168.100.9/24 scope global mlx_0_1 00:08:59.578 valid_lft forever preferred_lft forever 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.578 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:59.579 192.168.100.9' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:59.579 192.168.100.9' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:59.579 192.168.100.9' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1515204 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1515204 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1515204 ']' 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.579 18:00:59 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.579 [2024-07-15 18:00:59.233056] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:08:59.579 [2024-07-15 18:00:59.233110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.579 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.579 [2024-07-15 18:00:59.315266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.579 [2024-07-15 18:00:59.388491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.579 [2024-07-15 18:00:59.388528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.579 [2024-07-15 18:00:59.388537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.579 [2024-07-15 18:00:59.388545] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.579 [2024-07-15 18:00:59.388552] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.579 [2024-07-15 18:00:59.388641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.579 [2024-07-15 18:00:59.388734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.579 [2024-07-15 18:00:59.388821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.579 [2024-07-15 18:00:59.388822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:59.840 [2024-07-15 18:01:00.112716] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x191af80/0x191f470) succeed. 00:08:59.840 [2024-07-15 18:01:00.122058] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x191c5c0/0x1960b00) succeed. 
00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.840 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 [2024-07-15 18:01:00.246257] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.101 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.361 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.620 18:01:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:00.620 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:00.620 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:00.620 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.620 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.879 
18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:00.879 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.138 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:01.397 rmmod nvme_rdma 00:09:01.397 rmmod nvme_fabrics 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1515204 ']' 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1515204 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1515204 ']' 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1515204 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1515204 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1515204' 00:09:01.397 killing process with pid 1515204 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1515204 00:09:01.397 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1515204 00:09:01.656 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.656 18:01:01 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
00:09:01.656 00:09:01.656 real 0m10.371s 00:09:01.656 user 0m12.674s 00:09:01.656 sys 0m6.530s 00:09:01.656 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.656 18:01:01 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:01.656 ************************************ 00:09:01.656 END TEST nvmf_referrals 00:09:01.656 ************************************ 00:09:01.656 18:01:01 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:01.656 18:01:01 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:01.656 18:01:01 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.656 18:01:01 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.656 18:01:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:01.656 ************************************ 00:09:01.656 START TEST nvmf_connect_disconnect 00:09:01.656 ************************************ 00:09:01.656 18:01:01 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:01.934 * Looking for test storage... 00:09:01.934 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.934 18:01:02 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.934 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.935 18:01:02 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.067 18:01:09 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:10.067 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:10.067 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:10.067 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:10.067 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:10.067 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:09:10.068 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:10.068 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:10.068 altname enp217s0f0np0 00:09:10.068 altname ens818f0np0 00:09:10.068 inet 192.168.100.8/24 scope global mlx_0_0 00:09:10.068 valid_lft forever preferred_lft forever 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:10.068 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:10.068 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:10.068 altname enp217s0f1np1 00:09:10.068 altname ens818f1np1 00:09:10.068 inet 192.168.100.9/24 scope global mlx_0_1 00:09:10.068 valid_lft forever preferred_lft forever 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:10.068 192.168.100.9' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:10.068 192.168.100.9' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:10.068 192.168.100.9' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.068 18:01:09 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1519497 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1519497 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1519497 ']' 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.068 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.069 18:01:09 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:10.069 [2024-07-15 18:01:09.555466] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:09:10.069 [2024-07-15 18:01:09.555519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.069 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.069 [2024-07-15 18:01:09.636148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.069 [2024-07-15 18:01:09.709435] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.069 [2024-07-15 18:01:09.709475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.069 [2024-07-15 18:01:09.709484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.069 [2024-07-15 18:01:09.709492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.069 [2024-07-15 18:01:09.709514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
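The entries above show the harness launching the NVMe-oF target for this test (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and then waiting for its RPC socket at /var/tmp/spdk.sock before issuing any RPCs. A minimal sketch of reproducing that startup by hand is below; the relative build path and the simple poll against rpc_get_methods are assumptions for illustration, not steps taken from this log (the harness uses its own waitforlisten helper from autotest_common.sh):

    # Launch the SPDK NVMe-oF target with the same flags seen in the trace:
    # shared-memory id 0, all tracepoint groups enabled, reactors on cores 0-3.
    sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Poll the default RPC socket until the target answers, then continue.
    until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the reactors report started (next entries), the target is ready to accept the transport and subsystem RPCs that follow.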
00:09:10.069 [2024-07-15 18:01:09.709559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.069 [2024-07-15 18:01:09.709653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.069 [2024-07-15 18:01:09.709740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.069 [2024-07-15 18:01:09.709742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.069 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.069 [2024-07-15 18:01:10.413904] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:10.069 [2024-07-15 18:01:10.435577] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20f6f80/0x20fb470) succeed. 00:09:10.069 [2024-07-15 18:01:10.444788] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20f85c0/0x213cb00) succeed. 
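At this point the RDMA transport has been created (-t rdma --num-shared-buffers 1024 -u 8192 -c 0) and both mlx5 IB devices registered, and the next entries build the test subsystem through the harness's rpc_cmd wrapper. The sketch below replays the same sequence directly with scripts/rpc.py; only RPC names and arguments that appear in this trace are used, the wrapper variable is illustrative:

    RPC="sudo ./scripts/rpc.py -s /var/tmp/spdk.sock"

    # RDMA transport plus a 64 MB malloc bdev with 512-byte blocks (returns Malloc0).
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $RPC bdev_malloc_create 64 512

    # Subsystem with the namespace attached and an RDMA listener on the first target IP.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The connect/disconnect loop that follows (num_iterations=5) then repeatedly runs roughly the equivalent of "nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1" followed by "nvme disconnect -n nqn.2016-06.io.spdk:cnode1", which produces the five "disconnected 1 controller(s)" lines below.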
00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:10.329 [2024-07-15 18:01:10.586005] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:10.329 18:01:10 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:14.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:30.417 rmmod nvme_rdma 00:09:30.417 rmmod nvme_fabrics 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1519497 ']' 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1519497 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1519497 ']' 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1519497 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1519497 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1519497' 00:09:30.417 killing process with pid 1519497 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1519497 00:09:30.417 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1519497 00:09:30.677 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:30.677 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:30.677 00:09:30.677 real 0m28.844s 00:09:30.677 user 1m25.910s 00:09:30.677 sys 0m6.668s 00:09:30.677 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.677 18:01:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.677 ************************************ 00:09:30.677 END TEST nvmf_connect_disconnect 00:09:30.677 ************************************ 00:09:30.677 18:01:30 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:30.677 18:01:30 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:30.677 18:01:30 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:30.677 18:01:30 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.677 18:01:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:30.677 ************************************ 00:09:30.677 START TEST nvmf_multitarget 00:09:30.677 ************************************ 00:09:30.677 18:01:30 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:30.677 * Looking for test storage... 00:09:30.677 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.677 18:01:31 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.678 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:30.678 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:30.678 18:01:31 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:30.678 18:01:31 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:38.798 18:01:38 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:38.798 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:38.798 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:38.798 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:38.798 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:38.798 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:38.799 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.799 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:38.799 altname enp217s0f0np0 00:09:38.799 altname ens818f0np0 00:09:38.799 inet 192.168.100.8/24 scope global mlx_0_0 00:09:38.799 valid_lft forever preferred_lft forever 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:38.799 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:38.799 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:38.799 altname enp217s0f1np1 00:09:38.799 altname ens818f1np1 00:09:38.799 inet 192.168.100.9/24 scope global mlx_0_1 00:09:38.799 valid_lft forever preferred_lft forever 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:38.799 192.168.100.9' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:38.799 192.168.100.9' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:38.799 192.168.100.9' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1527190 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1527190 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1527190 ']' 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:38.799 18:01:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.799 [2024-07-15 18:01:38.505908] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:09:38.799 [2024-07-15 18:01:38.505960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.799 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.799 [2024-07-15 18:01:38.588740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.799 [2024-07-15 18:01:38.661622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.799 [2024-07-15 18:01:38.661662] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.799 [2024-07-15 18:01:38.661671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.799 [2024-07-15 18:01:38.661679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
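The multitarget test that starts here drives test/nvmf/target/multitarget_rpc.py: it counts the default target with jq, creates two extra named targets, verifies three exist, deletes both, and verifies only the default remains. A condensed sketch of that flow, using only the calls and arguments visible in the trace below (the $RPC shorthand is illustrative), would be:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    # One default target exists after nvmf_tgt starts.
    $RPC nvmf_get_targets | jq length          # expect 1

    # Add two named targets with the same -s 32 argument used in the trace, then re-count.
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length          # expect 3

    # Remove them and confirm only the default target remains.
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length          # expect 1

The length checks in the trace ('[' 1 '!=' 1 ']', '[' 3 '!=' 3 ']', '[' 1 '!=' 1 ']') correspond to these three expected counts.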
00:09:38.799 [2024-07-15 18:01:38.661702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.799 [2024-07-15 18:01:38.661751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.799 [2024-07-15 18:01:38.661844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.799 [2024-07-15 18:01:38.661932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.799 [2024-07-15 18:01:38.661934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:39.065 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:39.394 "nvmf_tgt_1" 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:39.394 "nvmf_tgt_2" 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:39.394 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:39.653 true 00:09:39.653 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:39.653 true 00:09:39.653 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:39.653 18:01:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:39.912 rmmod nvme_rdma 00:09:39.912 rmmod nvme_fabrics 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1527190 ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1527190 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1527190 ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1527190 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1527190 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1527190' 00:09:39.912 killing process with pid 1527190 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1527190 00:09:39.912 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1527190 00:09:40.171 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.171 18:01:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:40.171 00:09:40.171 real 0m9.469s 00:09:40.171 user 0m9.463s 00:09:40.171 sys 0m6.261s 00:09:40.171 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.171 18:01:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:40.172 ************************************ 00:09:40.172 END TEST nvmf_multitarget 00:09:40.172 ************************************ 00:09:40.172 18:01:40 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:40.172 18:01:40 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:40.172 18:01:40 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.172 18:01:40 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.172 18:01:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:40.172 ************************************ 00:09:40.172 START TEST nvmf_rpc 00:09:40.172 
************************************ 00:09:40.172 18:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:40.172 * Looking for test storage... 00:09:40.172 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.431 18:01:40 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.432 18:01:40 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.432 18:01:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:48.615 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:48.615 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:48.615 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:48.615 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:48.615 18:01:48 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:48.615 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.616 
18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:48.616 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:48.616 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:48.616 altname enp217s0f0np0 00:09:48.616 altname ens818f0np0 00:09:48.616 inet 192.168.100.8/24 scope global mlx_0_0 00:09:48.616 valid_lft forever preferred_lft forever 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:48.616 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:48.616 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:48.616 altname enp217s0f1np1 00:09:48.616 altname ens818f1np1 00:09:48.616 inet 192.168.100.9/24 scope global mlx_0_1 00:09:48.616 valid_lft forever preferred_lft forever 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.616 18:01:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.616 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:48.616 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:48.616 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:48.616 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:48.616 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:48.616 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:48.876 192.168.100.9' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:48.876 192.168.100.9' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:48.876 192.168.100.9' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1531564 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1531564 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1531564 ']' 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.876 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.876 [2024-07-15 18:01:49.116505] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:09:48.876 [2024-07-15 18:01:49.116554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.876 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.876 [2024-07-15 18:01:49.201369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.876 [2024-07-15 18:01:49.272681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.876 [2024-07-15 18:01:49.272726] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.876 [2024-07-15 18:01:49.272736] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.876 [2024-07-15 18:01:49.272744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.876 [2024-07-15 18:01:49.272767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
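The entries above cover nvmfappstart: the suite launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0xF, records its PID (1531564 in this run), and waits in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that bring-up, assuming it runs from an SPDK checkout so build/bin/nvmf_tgt and scripts/rpc.py resolve; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation:

    # Sketch of the target bring-up traced above (illustrative, not the suite's code).
    NVMF_TGT=./build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock

    "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC server until it answers; rpc_get_methods is a cheap query.
    until scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done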
00:09:48.876 [2024-07-15 18:01:49.272817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.876 [2024-07-15 18:01:49.272911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.876 [2024-07-15 18:01:49.272995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.876 [2024-07-15 18:01:49.272996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.812 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:49.813 "tick_rate": 2500000000, 00:09:49.813 "poll_groups": [ 00:09:49.813 { 00:09:49.813 "name": "nvmf_tgt_poll_group_000", 00:09:49.813 "admin_qpairs": 0, 00:09:49.813 "io_qpairs": 0, 00:09:49.813 "current_admin_qpairs": 0, 00:09:49.813 "current_io_qpairs": 0, 00:09:49.813 "pending_bdev_io": 0, 00:09:49.813 "completed_nvme_io": 0, 00:09:49.813 "transports": [] 00:09:49.813 }, 00:09:49.813 { 00:09:49.813 "name": "nvmf_tgt_poll_group_001", 00:09:49.813 "admin_qpairs": 0, 00:09:49.813 "io_qpairs": 0, 00:09:49.813 "current_admin_qpairs": 0, 00:09:49.813 "current_io_qpairs": 0, 00:09:49.813 "pending_bdev_io": 0, 00:09:49.813 "completed_nvme_io": 0, 00:09:49.813 "transports": [] 00:09:49.813 }, 00:09:49.813 { 00:09:49.813 "name": "nvmf_tgt_poll_group_002", 00:09:49.813 "admin_qpairs": 0, 00:09:49.813 "io_qpairs": 0, 00:09:49.813 "current_admin_qpairs": 0, 00:09:49.813 "current_io_qpairs": 0, 00:09:49.813 "pending_bdev_io": 0, 00:09:49.813 "completed_nvme_io": 0, 00:09:49.813 "transports": [] 00:09:49.813 }, 00:09:49.813 { 00:09:49.813 "name": "nvmf_tgt_poll_group_003", 00:09:49.813 "admin_qpairs": 0, 00:09:49.813 "io_qpairs": 0, 00:09:49.813 "current_admin_qpairs": 0, 00:09:49.813 "current_io_qpairs": 0, 00:09:49.813 "pending_bdev_io": 0, 00:09:49.813 "completed_nvme_io": 0, 00:09:49.813 "transports": [] 00:09:49.813 } 00:09:49.813 ] 00:09:49.813 }' 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:49.813 18:01:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:49.813 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:49.813 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:49.813 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:49.813 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:09:49.813 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.813 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.813 [2024-07-15 18:01:50.114471] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1348f90/0x134d480) succeed. 00:09:49.813 [2024-07-15 18:01:50.123795] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x134a5d0/0x138eb10) succeed. 00:09:50.071 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.071 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:50.071 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.071 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.071 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.071 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:50.071 "tick_rate": 2500000000, 00:09:50.071 "poll_groups": [ 00:09:50.071 { 00:09:50.071 "name": "nvmf_tgt_poll_group_000", 00:09:50.071 "admin_qpairs": 0, 00:09:50.071 "io_qpairs": 0, 00:09:50.071 "current_admin_qpairs": 0, 00:09:50.071 "current_io_qpairs": 0, 00:09:50.071 "pending_bdev_io": 0, 00:09:50.071 "completed_nvme_io": 0, 00:09:50.071 "transports": [ 00:09:50.071 { 00:09:50.071 "trtype": "RDMA", 00:09:50.071 "pending_data_buffer": 0, 00:09:50.071 "devices": [ 00:09:50.071 { 00:09:50.071 "name": "mlx5_0", 00:09:50.071 "polls": 15103, 00:09:50.071 "idle_polls": 15103, 00:09:50.071 "completions": 0, 00:09:50.071 "requests": 0, 00:09:50.071 "request_latency": 0, 00:09:50.071 "pending_free_request": 0, 00:09:50.071 "pending_rdma_read": 0, 00:09:50.071 "pending_rdma_write": 0, 00:09:50.071 "pending_rdma_send": 0, 00:09:50.071 "total_send_wrs": 0, 00:09:50.071 "send_doorbell_updates": 0, 00:09:50.071 "total_recv_wrs": 4096, 00:09:50.071 "recv_doorbell_updates": 1 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "mlx5_1", 00:09:50.072 "polls": 15103, 00:09:50.072 "idle_polls": 15103, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "nvmf_tgt_poll_group_001", 00:09:50.072 "admin_qpairs": 0, 00:09:50.072 "io_qpairs": 0, 00:09:50.072 "current_admin_qpairs": 0, 00:09:50.072 "current_io_qpairs": 0, 00:09:50.072 "pending_bdev_io": 0, 00:09:50.072 "completed_nvme_io": 0, 00:09:50.072 "transports": [ 00:09:50.072 { 00:09:50.072 "trtype": "RDMA", 00:09:50.072 "pending_data_buffer": 0, 00:09:50.072 "devices": [ 00:09:50.072 { 00:09:50.072 "name": "mlx5_0", 00:09:50.072 "polls": 9564, 00:09:50.072 "idle_polls": 9564, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 }, 00:09:50.072 
{ 00:09:50.072 "name": "mlx5_1", 00:09:50.072 "polls": 9564, 00:09:50.072 "idle_polls": 9564, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "nvmf_tgt_poll_group_002", 00:09:50.072 "admin_qpairs": 0, 00:09:50.072 "io_qpairs": 0, 00:09:50.072 "current_admin_qpairs": 0, 00:09:50.072 "current_io_qpairs": 0, 00:09:50.072 "pending_bdev_io": 0, 00:09:50.072 "completed_nvme_io": 0, 00:09:50.072 "transports": [ 00:09:50.072 { 00:09:50.072 "trtype": "RDMA", 00:09:50.072 "pending_data_buffer": 0, 00:09:50.072 "devices": [ 00:09:50.072 { 00:09:50.072 "name": "mlx5_0", 00:09:50.072 "polls": 5289, 00:09:50.072 "idle_polls": 5289, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "mlx5_1", 00:09:50.072 "polls": 5289, 00:09:50.072 "idle_polls": 5289, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "nvmf_tgt_poll_group_003", 00:09:50.072 "admin_qpairs": 0, 00:09:50.072 "io_qpairs": 0, 00:09:50.072 "current_admin_qpairs": 0, 00:09:50.072 "current_io_qpairs": 0, 00:09:50.072 "pending_bdev_io": 0, 00:09:50.072 "completed_nvme_io": 0, 00:09:50.072 "transports": [ 00:09:50.072 { 00:09:50.072 "trtype": "RDMA", 00:09:50.072 "pending_data_buffer": 0, 00:09:50.072 "devices": [ 00:09:50.072 { 00:09:50.072 "name": "mlx5_0", 00:09:50.072 "polls": 848, 00:09:50.072 "idle_polls": 848, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 }, 00:09:50.072 { 00:09:50.072 "name": "mlx5_1", 00:09:50.072 "polls": 848, 00:09:50.072 "idle_polls": 848, 00:09:50.072 "completions": 0, 00:09:50.072 "requests": 0, 00:09:50.072 "request_latency": 0, 00:09:50.072 "pending_free_request": 0, 00:09:50.072 "pending_rdma_read": 0, 00:09:50.072 "pending_rdma_write": 0, 00:09:50.072 "pending_rdma_send": 0, 00:09:50.072 "total_send_wrs": 0, 00:09:50.072 "send_doorbell_updates": 0, 00:09:50.072 "total_recv_wrs": 4096, 00:09:50.072 "recv_doorbell_updates": 1 00:09:50.072 } 00:09:50.072 ] 
00:09:50.072 } 00:09:50.072 ] 00:09:50.072 } 00:09:50.072 ] 00:09:50.072 }' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:50.072 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.331 Malloc1 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:50.331 18:01:50 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.331 [2024-07-15 18:01:50.540040] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:09:50.331 [2024-07-15 18:01:50.585971] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:09:50.331 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:50.331 could not add new controller: failed to write to 
nvme-fabrics device 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.331 18:01:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:51.269 18:01:51 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:51.269 18:01:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.269 18:01:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.269 18:01:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:51.269 18:01:51 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:53.805 18:01:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:54.372 [2024-07-15 18:01:54.667690] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:09:54.372 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:54.372 could not add new controller: failed to write to nvme-fabrics device 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:54.372 18:01:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:55.307 18:01:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.307 18:01:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.307 18:01:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.307 18:01:55 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:55.307 18:01:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:57.843 18:01:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.411 [2024-07-15 18:01:58.740276] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
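Each pass of the seq 1 5 loop traced around this point follows the same pattern: create the subsystem, expose it on the RDMA listener, attach the Malloc1 namespace, open it to any host, connect and disconnect from the initiator side, then remove the namespace and delete the subsystem before the next iteration. Condensed into the equivalent commands, as a sketch that assumes scripts/rpc.py is driving the same /var/tmp/spdk.sock the suite's rpc_cmd wrapper targets; NQN, serial, address and port are taken from the trace:

    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: subsystem, RDMA listener, namespace 5 backed by Malloc1.
    "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
    "$rpc" nvmf_subsystem_allow_any_host "$nqn"

    # Initiator side (the suite additionally passes --hostnqn/--hostid, as shown in the trace).
    nvme connect -i 15 -t rdma -n "$nqn" -a 192.168.100.8 -s 4420
    nvme disconnect -n "$nqn"

    # Tear down namespace 5 and the subsystem before the next iteration.
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
    "$rpc" nvmf_delete_subsystem "$nqn"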
00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:58.411 18:01:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:59.347 18:01:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.347 18:01:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:59.347 18:01:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.347 18:01:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:59.347 18:01:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:01.961 18:02:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:02.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 [2024-07-15 18:02:02.798522] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.528 18:02:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:03.470 18:02:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.470 18:02:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:03.470 18:02:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.470 18:02:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:03.470 18:02:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:06.003 18:02:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.570 [2024-07-15 18:02:06.857973] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.570 18:02:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:07.505 18:02:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.505 18:02:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:07.505 18:02:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.505 18:02:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:07.505 18:02:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:10.037 18:02:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 
-t rdma -a 192.168.100.8 -s 4420 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.603 [2024-07-15 18:02:10.928067] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.603 18:02:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:11.539 18:02:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.539 18:02:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.539 18:02:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.539 18:02:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:11.539 18:02:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:14.073 18:02:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
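
For readability, the two serial-polling helpers being traced here (waitforserial and waitforserial_disconnect) reduce to a small lsblk/grep retry loop. The sketch below is a reconstruction from the trace, not the SPDK source; the 15-retry cap and the 2-second sleep are the values visible above, while the function bodies and names are illustrative only.

  # Reconstructed polling pattern (illustrative): wait until a block device
  # whose SERIAL column matches appears in, or drops out of, lsblk output.
  waitforserial_sketch() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }

  waitforserial_disconnect_sketch() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 1
      done
      return 1
  }
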
00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.640 [2024-07-15 18:02:14.975443] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.640 18:02:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:15.576 18:02:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.576 18:02:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.576 18:02:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.576 18:02:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.576 18:02:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 
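
Taken together, each pass of the loop above exercises the full subsystem lifecycle over RDMA: create the subsystem, attach a listener and namespace, connect with the kernel initiator, then tear everything back down. A condensed sketch of one iteration, using the same RPCs, NQN and listener address seen in the trace (the rpc.py path and the $loops variable are assumptions):

  rpc=./scripts/rpc.py            # assumed location of SPDK's rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 "$loops"); do
      "$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      "$rpc" nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
      "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
      "$rpc" nvmf_subsystem_allow_any_host "$nqn"
      nvme connect -i 15 -t rdma -n "$nqn" -a 192.168.100.8 -s 4420 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      waitforserial SPDKISFASTANDAWESOME             # polling helper traced above
      nvme disconnect -n "$nqn"
      waitforserial_disconnect SPDKISFASTANDAWESOME  # polling helper traced above
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 5
      "$rpc" nvmf_delete_subsystem "$nqn"
  done
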
00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:18.142 18:02:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.711 18:02:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.711 18:02:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:18.711 18:02:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:18.711 18:02:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.711 18:02:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:18.711 18:02:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 [2024-07-15 18:02:19.055710] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.711 [2024-07-15 18:02:19.103860] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.711 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.971 
18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.971 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 [2024-07-15 18:02:19.156088] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 [2024-07-15 18:02:19.204229] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 [2024-07-15 18:02:19.252404] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.972 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:18.972 "tick_rate": 2500000000, 00:10:18.972 "poll_groups": [ 00:10:18.972 { 00:10:18.972 "name": "nvmf_tgt_poll_group_000", 00:10:18.972 "admin_qpairs": 2, 00:10:18.972 "io_qpairs": 27, 00:10:18.972 "current_admin_qpairs": 0, 00:10:18.972 "current_io_qpairs": 0, 00:10:18.972 "pending_bdev_io": 0, 00:10:18.972 "completed_nvme_io": 127, 00:10:18.972 "transports": [ 00:10:18.972 { 00:10:18.972 "trtype": "RDMA", 00:10:18.972 "pending_data_buffer": 0, 00:10:18.972 "devices": [ 00:10:18.972 { 00:10:18.972 "name": "mlx5_0", 00:10:18.972 "polls": 3456001, 00:10:18.972 "idle_polls": 3455677, 00:10:18.972 "completions": 365, 00:10:18.972 "requests": 182, 00:10:18.972 "request_latency": 35470400, 00:10:18.972 "pending_free_request": 0, 00:10:18.972 "pending_rdma_read": 0, 00:10:18.972 "pending_rdma_write": 0, 00:10:18.972 "pending_rdma_send": 0, 00:10:18.972 "total_send_wrs": 308, 00:10:18.972 "send_doorbell_updates": 161, 00:10:18.972 "total_recv_wrs": 4278, 00:10:18.972 "recv_doorbell_updates": 161 00:10:18.972 }, 00:10:18.972 { 00:10:18.972 "name": "mlx5_1", 00:10:18.972 "polls": 3456001, 00:10:18.972 "idle_polls": 3456001, 00:10:18.972 "completions": 0, 00:10:18.972 "requests": 0, 00:10:18.972 "request_latency": 0, 00:10:18.972 "pending_free_request": 0, 00:10:18.972 "pending_rdma_read": 0, 00:10:18.972 "pending_rdma_write": 0, 00:10:18.972 "pending_rdma_send": 0, 00:10:18.972 "total_send_wrs": 0, 00:10:18.972 "send_doorbell_updates": 0, 00:10:18.972 "total_recv_wrs": 4096, 00:10:18.972 "recv_doorbell_updates": 1 00:10:18.972 } 
00:10:18.972 ] 00:10:18.972 } 00:10:18.972 ] 00:10:18.972 }, 00:10:18.972 { 00:10:18.972 "name": "nvmf_tgt_poll_group_001", 00:10:18.972 "admin_qpairs": 2, 00:10:18.972 "io_qpairs": 26, 00:10:18.972 "current_admin_qpairs": 0, 00:10:18.972 "current_io_qpairs": 0, 00:10:18.972 "pending_bdev_io": 0, 00:10:18.972 "completed_nvme_io": 125, 00:10:18.972 "transports": [ 00:10:18.972 { 00:10:18.972 "trtype": "RDMA", 00:10:18.972 "pending_data_buffer": 0, 00:10:18.972 "devices": [ 00:10:18.972 { 00:10:18.972 "name": "mlx5_0", 00:10:18.972 "polls": 3405857, 00:10:18.972 "idle_polls": 3405540, 00:10:18.972 "completions": 356, 00:10:18.972 "requests": 178, 00:10:18.972 "request_latency": 35466600, 00:10:18.972 "pending_free_request": 0, 00:10:18.972 "pending_rdma_read": 0, 00:10:18.972 "pending_rdma_write": 0, 00:10:18.972 "pending_rdma_send": 0, 00:10:18.972 "total_send_wrs": 302, 00:10:18.972 "send_doorbell_updates": 156, 00:10:18.972 "total_recv_wrs": 4274, 00:10:18.972 "recv_doorbell_updates": 157 00:10:18.972 }, 00:10:18.972 { 00:10:18.972 "name": "mlx5_1", 00:10:18.972 "polls": 3405857, 00:10:18.972 "idle_polls": 3405857, 00:10:18.972 "completions": 0, 00:10:18.972 "requests": 0, 00:10:18.972 "request_latency": 0, 00:10:18.972 "pending_free_request": 0, 00:10:18.972 "pending_rdma_read": 0, 00:10:18.972 "pending_rdma_write": 0, 00:10:18.972 "pending_rdma_send": 0, 00:10:18.972 "total_send_wrs": 0, 00:10:18.972 "send_doorbell_updates": 0, 00:10:18.972 "total_recv_wrs": 4096, 00:10:18.972 "recv_doorbell_updates": 1 00:10:18.972 } 00:10:18.972 ] 00:10:18.972 } 00:10:18.972 ] 00:10:18.972 }, 00:10:18.972 { 00:10:18.972 "name": "nvmf_tgt_poll_group_002", 00:10:18.972 "admin_qpairs": 1, 00:10:18.972 "io_qpairs": 26, 00:10:18.972 "current_admin_qpairs": 0, 00:10:18.972 "current_io_qpairs": 0, 00:10:18.972 "pending_bdev_io": 0, 00:10:18.972 "completed_nvme_io": 76, 00:10:18.972 "transports": [ 00:10:18.972 { 00:10:18.972 "trtype": "RDMA", 00:10:18.972 "pending_data_buffer": 0, 00:10:18.972 "devices": [ 00:10:18.972 { 00:10:18.973 "name": "mlx5_0", 00:10:18.973 "polls": 3549921, 00:10:18.973 "idle_polls": 3549733, 00:10:18.973 "completions": 207, 00:10:18.973 "requests": 103, 00:10:18.973 "request_latency": 19509340, 00:10:18.973 "pending_free_request": 0, 00:10:18.973 "pending_rdma_read": 0, 00:10:18.973 "pending_rdma_write": 0, 00:10:18.973 "pending_rdma_send": 0, 00:10:18.973 "total_send_wrs": 166, 00:10:18.973 "send_doorbell_updates": 94, 00:10:18.973 "total_recv_wrs": 4199, 00:10:18.973 "recv_doorbell_updates": 94 00:10:18.973 }, 00:10:18.973 { 00:10:18.973 "name": "mlx5_1", 00:10:18.973 "polls": 3549921, 00:10:18.973 "idle_polls": 3549921, 00:10:18.973 "completions": 0, 00:10:18.973 "requests": 0, 00:10:18.973 "request_latency": 0, 00:10:18.973 "pending_free_request": 0, 00:10:18.973 "pending_rdma_read": 0, 00:10:18.973 "pending_rdma_write": 0, 00:10:18.973 "pending_rdma_send": 0, 00:10:18.973 "total_send_wrs": 0, 00:10:18.973 "send_doorbell_updates": 0, 00:10:18.973 "total_recv_wrs": 4096, 00:10:18.973 "recv_doorbell_updates": 1 00:10:18.973 } 00:10:18.973 ] 00:10:18.973 } 00:10:18.973 ] 00:10:18.973 }, 00:10:18.973 { 00:10:18.973 "name": "nvmf_tgt_poll_group_003", 00:10:18.973 "admin_qpairs": 2, 00:10:18.973 "io_qpairs": 26, 00:10:18.973 "current_admin_qpairs": 0, 00:10:18.973 "current_io_qpairs": 0, 00:10:18.973 "pending_bdev_io": 0, 00:10:18.973 "completed_nvme_io": 127, 00:10:18.973 "transports": [ 00:10:18.973 { 00:10:18.973 "trtype": "RDMA", 00:10:18.973 "pending_data_buffer": 0, 00:10:18.973 
"devices": [ 00:10:18.973 { 00:10:18.973 "name": "mlx5_0", 00:10:18.973 "polls": 2694190, 00:10:18.973 "idle_polls": 2693873, 00:10:18.973 "completions": 360, 00:10:18.973 "requests": 180, 00:10:18.973 "request_latency": 37802992, 00:10:18.973 "pending_free_request": 0, 00:10:18.973 "pending_rdma_read": 0, 00:10:18.973 "pending_rdma_write": 0, 00:10:18.973 "pending_rdma_send": 0, 00:10:18.973 "total_send_wrs": 306, 00:10:18.973 "send_doorbell_updates": 155, 00:10:18.973 "total_recv_wrs": 4276, 00:10:18.973 "recv_doorbell_updates": 156 00:10:18.973 }, 00:10:18.973 { 00:10:18.973 "name": "mlx5_1", 00:10:18.973 "polls": 2694190, 00:10:18.973 "idle_polls": 2694190, 00:10:18.973 "completions": 0, 00:10:18.973 "requests": 0, 00:10:18.973 "request_latency": 0, 00:10:18.973 "pending_free_request": 0, 00:10:18.973 "pending_rdma_read": 0, 00:10:18.973 "pending_rdma_write": 0, 00:10:18.973 "pending_rdma_send": 0, 00:10:18.973 "total_send_wrs": 0, 00:10:18.973 "send_doorbell_updates": 0, 00:10:18.973 "total_recv_wrs": 4096, 00:10:18.973 "recv_doorbell_updates": 1 00:10:18.973 } 00:10:18.973 ] 00:10:18.973 } 00:10:18.973 ] 00:10:18.973 } 00:10:18.973 ] 00:10:18.973 }' 00:10:18.973 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:18.973 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:18.973 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:18.973 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 128249332 > 0 )) 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:19.232 rmmod nvme_rdma 00:10:19.232 rmmod nvme_fabrics 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1531564 ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1531564 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1531564 ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1531564 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1531564 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1531564' 00:10:19.232 killing process with pid 1531564 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1531564 00:10:19.232 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1531564 00:10:19.800 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.800 18:02:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:19.800 00:10:19.800 real 0m39.424s 00:10:19.800 user 2m4.582s 00:10:19.800 sys 0m8.235s 00:10:19.800 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.800 18:02:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:19.800 ************************************ 00:10:19.800 END TEST nvmf_rpc 00:10:19.800 ************************************ 00:10:19.800 18:02:19 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:19.800 18:02:19 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:19.800 18:02:19 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.800 18:02:19 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.800 18:02:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:19.800 ************************************ 00:10:19.800 START TEST nvmf_invalid 00:10:19.800 ************************************ 00:10:19.800 18:02:19 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:19.800 * Looking for test storage... 
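
The admin_qpairs, io_qpairs, completions and request_latency assertions that close out the nvmf_rpc run above are driven by a small jq-plus-awk aggregator over the nvmf_get_stats JSON. A minimal sketch of that pattern (this version re-queries the target rather than reusing the captured $stats variable, and the rpc.py path is an assumption):

  # Sum one numeric field across all poll groups / devices in nvmf_get_stats.
  jsum_sketch() {
      local filter=$1
      ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }

  # Example: require that at least one I/O queue pair was created overall.
  (( $(jsum_sketch '.poll_groups[].io_qpairs') > 0 ))
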
00:10:19.800 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:19.800 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.801 18:02:20 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.801 18:02:20 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.921 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.922 
18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:27.922 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:27.922 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:27.922 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:27.922 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
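
The device-discovery phase above maps each RDMA-capable PCI function to its kernel netdev by globbing /sys/bus/pci/devices/<bdf>/net/, then reads the interface's IPv4 address with the same ip/awk/cut pipeline the trace uses. A standalone sketch of that lookup (the two PCI addresses are the mlx5 ports reported in this run; the rest is generic and illustrative):

  for pci in 0000:d9:00.0 0000:d9:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue
          ifname=${netdir##*/}                                   # e.g. mlx_0_0
          addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
          echo "Found net device under $pci: $ifname (${addr:-no IPv4 address})"
      done
  done
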
00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:27.922 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:27.922 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:27.922 altname enp217s0f0np0 00:10:27.922 altname ens818f0np0 00:10:27.922 inet 192.168.100.8/24 scope global mlx_0_0 00:10:27.922 valid_lft forever preferred_lft forever 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:27.922 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:27.922 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:27.922 altname enp217s0f1np1 00:10:27.922 altname ens818f1np1 00:10:27.922 inet 192.168.100.9/24 scope global mlx_0_1 00:10:27.922 valid_lft forever preferred_lft forever 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:27.922 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:27.923 192.168.100.9' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:27.923 192.168.100.9' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:27.923 192.168.100.9' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1540872 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1540872 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1540872 ']' 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.923 18:02:27 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:27.923 [2024-07-15 18:02:27.935069] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:10:27.923 [2024-07-15 18:02:27.935128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.923 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.923 [2024-07-15 18:02:28.018828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.923 [2024-07-15 18:02:28.094509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.923 [2024-07-15 18:02:28.094544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.923 [2024-07-15 18:02:28.094554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.923 [2024-07-15 18:02:28.094562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.923 [2024-07-15 18:02:28.094569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
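Before nvmf_tgt is launched above, allocate_nic_ips resolves one IPv4 address per RDMA interface, and get_available_rdma_ips turns those into NVMF_FIRST_TARGET_IP (192.168.100.8) and NVMF_SECOND_TARGET_IP (192.168.100.9). A minimal sketch of that address extraction, reusing the ip/awk/cut pipeline and the mlx_0_0/mlx_0_1 names seen in this run (simplified relative to the actual nvmf/common.sh helpers):

#!/usr/bin/env bash
get_ip_address() {
    local interface=$1
    # Same pipeline the harness traces above: take the addr/prefix field, drop the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'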
00:10:27.923 [2024-07-15 18:02:28.094614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.923 [2024-07-15 18:02:28.094710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.923 [2024-07-15 18:02:28.094730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.923 [2024-07-15 18:02:28.094732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:28.490 18:02:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode528 00:10:28.750 [2024-07-15 18:02:28.948252] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:28.750 18:02:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:28.750 { 00:10:28.750 "nqn": "nqn.2016-06.io.spdk:cnode528", 00:10:28.750 "tgt_name": "foobar", 00:10:28.750 "method": "nvmf_create_subsystem", 00:10:28.750 "req_id": 1 00:10:28.750 } 00:10:28.750 Got JSON-RPC error response 00:10:28.750 response: 00:10:28.750 { 00:10:28.750 "code": -32603, 00:10:28.750 "message": "Unable to find target foobar" 00:10:28.750 }' 00:10:28.750 18:02:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:28.750 { 00:10:28.750 "nqn": "nqn.2016-06.io.spdk:cnode528", 00:10:28.750 "tgt_name": "foobar", 00:10:28.750 "method": "nvmf_create_subsystem", 00:10:28.750 "req_id": 1 00:10:28.750 } 00:10:28.750 Got JSON-RPC error response 00:10:28.750 response: 00:10:28.750 { 00:10:28.750 "code": -32603, 00:10:28.750 "message": "Unable to find target foobar" 00:10:28.750 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:28.750 18:02:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:28.750 18:02:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2439 00:10:28.750 [2024-07-15 18:02:29.140967] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2439: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:29.009 { 00:10:29.009 "nqn": "nqn.2016-06.io.spdk:cnode2439", 00:10:29.009 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:29.009 "method": "nvmf_create_subsystem", 00:10:29.009 "req_id": 1 00:10:29.009 } 00:10:29.009 Got JSON-RPC error response 00:10:29.009 response: 00:10:29.009 { 00:10:29.009 "code": -32602, 00:10:29.009 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:29.009 }' 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ 
request: 00:10:29.009 { 00:10:29.009 "nqn": "nqn.2016-06.io.spdk:cnode2439", 00:10:29.009 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:29.009 "method": "nvmf_create_subsystem", 00:10:29.009 "req_id": 1 00:10:29.009 } 00:10:29.009 Got JSON-RPC error response 00:10:29.009 response: 00:10:29.009 { 00:10:29.009 "code": -32602, 00:10:29.009 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:29.009 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7519 00:10:29.009 [2024-07-15 18:02:29.333557] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7519: invalid model number 'SPDK_Controller' 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:29.009 { 00:10:29.009 "nqn": "nqn.2016-06.io.spdk:cnode7519", 00:10:29.009 "model_number": "SPDK_Controller\u001f", 00:10:29.009 "method": "nvmf_create_subsystem", 00:10:29.009 "req_id": 1 00:10:29.009 } 00:10:29.009 Got JSON-RPC error response 00:10:29.009 response: 00:10:29.009 { 00:10:29.009 "code": -32602, 00:10:29.009 "message": "Invalid MN SPDK_Controller\u001f" 00:10:29.009 }' 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:29.009 { 00:10:29.009 "nqn": "nqn.2016-06.io.spdk:cnode7519", 00:10:29.009 "model_number": "SPDK_Controller\u001f", 00:10:29.009 "method": "nvmf_create_subsystem", 00:10:29.009 "req_id": 1 00:10:29.009 } 00:10:29.009 Got JSON-RPC error response 00:10:29.009 response: 00:10:29.009 { 00:10:29.009 "code": -32602, 00:10:29.009 "message": "Invalid MN SPDK_Controller\u001f" 00:10:29.009 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:29.009 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
113 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.010 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x2a' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 
00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'yqyxAa80%*OuO/zZBW31' 00:10:29.269 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'yqyxAa80%*OuO/zZBW31' nqn.2016-06.io.spdk:cnode29759 00:10:29.529 [2024-07-15 18:02:29.686739] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29759: invalid serial number 'yqyxAa80%*OuO/zZBW31' 00:10:29.529 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:29.529 { 00:10:29.529 "nqn": "nqn.2016-06.io.spdk:cnode29759", 00:10:29.529 "serial_number": "yqyxAa80%*OuO\u007f/zZBW31", 00:10:29.529 "method": "nvmf_create_subsystem", 00:10:29.529 "req_id": 1 00:10:29.529 } 00:10:29.529 Got JSON-RPC error response 00:10:29.529 response: 00:10:29.529 { 00:10:29.529 "code": -32602, 00:10:29.529 "message": "Invalid SN yqyxAa80%*OuO\u007f/zZBW31" 00:10:29.529 }' 00:10:29.529 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:29.529 { 00:10:29.529 "nqn": "nqn.2016-06.io.spdk:cnode29759", 00:10:29.529 "serial_number": "yqyxAa80%*OuO\u007f/zZBW31", 00:10:29.529 "method": "nvmf_create_subsystem", 00:10:29.529 "req_id": 1 00:10:29.529 } 00:10:29.529 Got JSON-RPC error response 00:10:29.529 response: 00:10:29.529 { 00:10:29.529 "code": -32602, 00:10:29.529 "message": "Invalid SN yqyxAa80%*OuO\u007f/zZBW31" 00:10:29.529 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:29.529 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' 
'56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:29.530 
18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 
-- # string+=- 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:29.530 18:02:29 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.530 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:10:29.790 18:02:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nBXIb8L1Ha_\iG-Xw>O{~;3:6KO{~;3:6KO{~;3:6KO{~;3:6KO{~;3:6KO{~;3:6KO{~;3:6K /dev/null' 00:10:32.414 18:02:32 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.414 18:02:32 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.414 18:02:32 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.414 18:02:32 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.414 18:02:32 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.531 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.531 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:40.532 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:40.532 18:02:40 
nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:40.532 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:40.532 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:40.532 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:40.532 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:40.532 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:40.532 altname enp217s0f0np0 00:10:40.532 altname ens818f0np0 00:10:40.532 inet 192.168.100.8/24 scope global mlx_0_0 00:10:40.532 valid_lft forever preferred_lft forever 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:40.532 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:40.533 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:40.533 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:40.533 altname enp217s0f1np1 00:10:40.533 altname ens818f1np1 00:10:40.533 inet 192.168.100.9/24 scope global mlx_0_1 00:10:40.533 valid_lft forever preferred_lft forever 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:40.533 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:40.793 192.168.100.9' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:40.793 192.168.100.9' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:40.793 192.168.100.9' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1545947 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1545947 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1545947 ']' 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
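The allocate_nic_ips trace above derives each RDMA interface's address with the same three-step pipeline: ip -o -4 addr show, awk for the fourth field, cut to strip the prefix length. A minimal standalone sketch of that helper, reconstructed from the trace (the real get_ip_address in nvmf/common.sh may differ in details):

  # Print the IPv4 address of an interface without its /prefix suffix.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
  get_ip_address mlx_0_1   # 192.168.100.9

These two values are what the script later records as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.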
00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.793 18:02:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:40.793 [2024-07-15 18:02:41.036335] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:10:40.793 [2024-07-15 18:02:41.036383] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.793 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.793 [2024-07-15 18:02:41.119490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.793 [2024-07-15 18:02:41.192099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.793 [2024-07-15 18:02:41.192139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.793 [2024-07-15 18:02:41.192148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.793 [2024-07-15 18:02:41.192161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.793 [2024-07-15 18:02:41.192168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.793 [2024-07-15 18:02:41.192281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.793 [2024-07-15 18:02:41.192364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.793 [2024-07-15 18:02:41.192367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:41 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 [2024-07-15 18:02:41.922432] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1650500/0x16549f0) succeed. 00:10:41.809 [2024-07-15 18:02:41.931419] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1651aa0/0x1696080) succeed. 
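With the IB devices up, the abort test builds its target: a 64 MB malloc bdev wrapped in a delay bdev (Delay0), exposed as namespace 1 of nqn.2016-06.io.spdk:cnode0 and listened on 192.168.100.8:4420 over RDMA, then driven by the abort example; the Delay0 namespace is presumably there to keep I/O in flight long enough for aborts to land. The trace below issues these calls through the rpc_cmd test helper; a sketch of the equivalent sequence with scripts/rpc.py directly, with paths and parameters taken from the log:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256   # already done above
  $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

  # Drive it with the abort example, exactly as the trace below does.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128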
00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 Malloc0 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 Delay0 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 [2024-07-15 18:02:42.087755] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.809 18:02:42 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:41.809 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.809 [2024-07-15 18:02:42.182605] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.339 Initializing NVMe Controllers 00:10:44.339 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:44.339 controller IO queue size 128 less than required 00:10:44.339 Consider using lower queue depth or small IO size because 
IO requests may be queued at the NVMe driver. 00:10:44.339 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:44.339 Initialization complete. Launching workers. 00:10:44.339 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51157 00:10:44.339 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51218, failed to submit 62 00:10:44.339 success 51158, unsuccess 60, failed 0 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:44.339 rmmod nvme_rdma 00:10:44.339 rmmod nvme_fabrics 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1545947 ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1545947 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1545947 ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1545947 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1545947 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1545947' 00:10:44.339 killing process with pid 1545947 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1545947 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1545947 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:44.339 00:10:44.339 real 0m12.233s 00:10:44.339 user 0m14.855s 00:10:44.339 sys 0m6.946s 00:10:44.339 18:02:44 
nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.339 18:02:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:44.339 ************************************ 00:10:44.339 END TEST nvmf_abort 00:10:44.339 ************************************ 00:10:44.339 18:02:44 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:44.339 18:02:44 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:44.339 18:02:44 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.339 18:02:44 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.339 18:02:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:44.597 ************************************ 00:10:44.597 START TEST nvmf_ns_hotplug_stress 00:10:44.597 ************************************ 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:44.597 * Looking for test storage... 00:10:44.597 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:44.597 18:02:44 
nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.597 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.598 18:02:44 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.721 18:02:52 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:52.721 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:52.721 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.721 18:02:52 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:52.721 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:52.722 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:52.722 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:52.722 18:02:52 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:52.722 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.722 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:52.722 altname enp217s0f0np0 00:10:52.722 altname ens818f0np0 00:10:52.722 inet 192.168.100.8/24 scope global mlx_0_0 00:10:52.722 valid_lft 
forever preferred_lft forever 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:52.722 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.722 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:52.722 altname enp217s0f1np1 00:10:52.722 altname ens818f1np1 00:10:52.722 inet 192.168.100.9/24 scope global mlx_0_1 00:10:52.722 valid_lft forever preferred_lft forever 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:52.722 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.003 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.004 192.168.100.9' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:53.004 192.168.100.9' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:53.004 192.168.100.9' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1550548 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 
1550548 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1550548 ']' 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.004 18:02:53 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:53.004 [2024-07-15 18:02:53.264689] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:10:53.004 [2024-07-15 18:02:53.264753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.004 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.004 [2024-07-15 18:02:53.347726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.271 [2024-07-15 18:02:53.422814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.271 [2024-07-15 18:02:53.422851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.271 [2024-07-15 18:02:53.422861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.271 [2024-07-15 18:02:53.422870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.271 [2024-07-15 18:02:53.422878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:53.271 [2024-07-15 18:02:53.422925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.271 [2024-07-15 18:02:53.423008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.271 [2024-07-15 18:02:53.423010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:53.838 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:54.097 [2024-07-15 18:02:54.288564] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20f4500/0x20f89f0) succeed. 00:10:54.097 [2024-07-15 18:02:54.297703] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20f5aa0/0x213a080) succeed. 00:10:54.097 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:54.356 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.616 [2024-07-15 18:02:54.772246] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.616 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:54.616 18:02:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:54.874 Malloc0 00:10:54.874 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:55.132 Delay0 00:10:55.132 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.391 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:55.391 NULL1 00:10:55.391 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
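Setup for the hotplug stress is now complete: NULL1 (a null bdev) sits behind nqn.2016-06.io.spdk:cnode1 alongside Delay0. What follows is a 30-second spdk_nvme_perf randread run (PERF_PID below) while the script keeps detaching and re-attaching the namespace and resizing NULL1; the repeated 'Read completed with error (sct=0, sc=11)' lines, mostly suppressed in batches of 999, are the expected fallout of removing the namespace mid-I/O. A rough sketch of that phase, reconstructed from the rpc.py calls in the trace (the real ns_hotplug_stress.sh may bound and order the loop differently):

  perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                  # loop while perf is still running
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 "$null_size"               # grow NULL1 one step per pass
  done
  wait "$PERF_PID"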
00:10:55.649 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:55.649 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1551045 00:10:55.649 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:10:55.649 18:02:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.649 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.027 Read completed with error (sct=0, sc=11) 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 18:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.028 18:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:57.028 18:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:57.028 true 00:10:57.028 18:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:10:57.028 18:02:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.964 18:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.224 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.224 18:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:58.224 18:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:58.224 true 00:10:58.224 18:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:10:58.224 18:02:58 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.161 18:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.421 18:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:59.421 18:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:59.421 true 00:10:59.421 18:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:10:59.421 18:02:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.356 18:03:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.614 18:03:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:00.614 18:03:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:00.614 true 00:11:00.614 18:03:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:00.614 18:03:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.547 18:03:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:11:01.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.547 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.838 18:03:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:01.838 18:03:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:01.838 true 00:11:01.838 18:03:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:01.838 18:03:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.773 18:03:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.773 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.032 18:03:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:03.032 18:03:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:03.032 true 00:11:03.032 18:03:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:03.032 18:03:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.980 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.981 18:03:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.241 18:03:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:04.241 18:03:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:04.241 true 00:11:04.241 18:03:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:04.241 18:03:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.190 18:03:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.463 18:03:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:05.463 18:03:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:05.463 true 00:11:05.463 18:03:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:05.463 18:03:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.399 18:03:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.658 18:03:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:06.658 18:03:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:06.658 true 00:11:06.658 18:03:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:06.658 18:03:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.595 18:03:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.595 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:11:07.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.595 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.854 18:03:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:07.854 18:03:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:07.854 true 00:11:07.854 18:03:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:07.855 18:03:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.791 18:03:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.049 18:03:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:09.049 18:03:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:09.049 true 00:11:09.049 18:03:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:09.049 18:03:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:09.983 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.257 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:10.257 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:10.257 true 00:11:10.257 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:10.257 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.517 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.776 18:03:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:10.776 18:03:10 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:10.776 true 00:11:10.776 18:03:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:10.776 18:03:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 18:03:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:12.151 18:03:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:12.151 18:03:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:12.409 true 00:11:12.409 18:03:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:12.409 18:03:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 18:03:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.347 18:03:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:13.347 18:03:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:13.606 true 00:11:13.606 18:03:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:13.606 18:03:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 18:03:14 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.543 18:03:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:14.543 18:03:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:14.801 true 00:11:14.801 18:03:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:14.801 18:03:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 18:03:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.737 18:03:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:15.737 18:03:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:15.994 true 00:11:15.994 18:03:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:15.994 18:03:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.970 18:03:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:11:16.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.970 18:03:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:16.970 18:03:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:16.970 true 00:11:16.970 18:03:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:16.970 18:03:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.905 18:03:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:18.163 18:03:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:18.163 18:03:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:18.163 true 00:11:18.163 18:03:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:18.163 18:03:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.100 18:03:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:19.359 18:03:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:19.359 18:03:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:19.618 true 00:11:19.618 18:03:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:19.618 18:03:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.554 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:11:20.554 18:03:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.554 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:20.554 18:03:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:20.554 18:03:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:20.813 true 00:11:20.813 18:03:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:20.813 18:03:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 18:03:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:21.749 18:03:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:21.749 18:03:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:22.008 true 00:11:22.008 18:03:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:22.008 18:03:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.945 18:03:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.945 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:11:22.945 18:03:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:22.945 18:03:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:23.204 true 00:11:23.204 18:03:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:23.204 18:03:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.139 18:03:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.139 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.140 18:03:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:24.140 18:03:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:24.398 true 00:11:24.398 18:03:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:24.398 18:03:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 18:03:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:25.334 18:03:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:25.334 18:03:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:25.592 true 00:11:25.592 18:03:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:25.592 18:03:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.528 18:03:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.528 18:03:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:26.528 18:03:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:26.787 true 00:11:26.787 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:26.787 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.049 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.308 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:27.308 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:27.308 true 00:11:27.308 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:27.308 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.567 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.826 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:27.826 18:03:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:27.826 true 00:11:27.826 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:27.826 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.085 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.345 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:28.345 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:28.345 true 00:11:28.345 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:28.345 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.345 Initializing NVMe Controllers 00:11:28.345 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode1 00:11:28.345 Controller IO queue size 128, less than required. 00:11:28.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:28.345 Controller IO queue size 128, less than required. 00:11:28.345 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:28.345 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:28.345 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:28.345 Initialization complete. Launching workers. 00:11:28.345 ======================================================== 00:11:28.345 Latency(us) 00:11:28.345 Device Information : IOPS MiB/s Average min max 00:11:28.345 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5541.80 2.71 20599.88 859.19 1134487.02 00:11:28.345 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34410.80 16.80 3719.65 1844.06 286857.59 00:11:28.345 ======================================================== 00:11:28.345 Total : 39952.60 19.51 6061.10 859.19 1134487.02 00:11:28.345 00:11:28.603 18:03:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.862 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:28.862 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:28.862 true 00:11:28.862 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1551045 00:11:28.862 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1551045) - No such process 00:11:28.862 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1551045 00:11:28.862 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.120 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:29.379 null0 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.379 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 
100 4096 00:11:29.637 null1 00:11:29.637 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.637 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.637 18:03:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:29.895 null2 00:11:29.895 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:29.895 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:29.895 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:30.153 null3 00:11:30.153 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:30.153 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:30.153 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:30.153 null4 00:11:30.153 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:30.153 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:30.153 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:30.410 null5 00:11:30.410 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:30.410 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:30.410 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:30.672 null6 00:11:30.672 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:30.672 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:30.672 18:03:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:30.672 null7 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
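The eight null bdevs created above (null0 through null7) back the parallel add/remove workers launched next. Their setup reduces to one bdev_null_create call per worker; a minimal sketch, assuming the usual rpc.py argument order where 100 is the bdev size in MB and 4096 the block size:

# Sketch of the per-worker null bdev setup seen above. nthreads=8 and the
# null$i names match the trace; size/block-size meaning follows the standard
# bdev_null_create <name> <size_mb> <block_size> form.
rpc=./scripts/rpc.py
nthreads=8
for ((i = 0; i < nthreads; i++)); do
    "$rpc" bdev_null_create "null$i" 100 4096
done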
00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.672 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
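The interleaved @14-@18 and @59-@66 lines here are eight copies of the script's add_remove helper running in parallel, one per null bdev: each worker repeatedly attaches its bdev to cnode1 under a fixed namespace ID and detaches it again, while the parent records the worker PIDs and waits on them (the wait with eight PIDs appears just below). A condensed sketch of that structure, with the helper body and loop bounds taken from the visible trace:

# Sketch of the parallel namespace add/remove phase driven in the trace.
rpc=./scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # hot-add bdev as namespace $nsid
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # hot-remove it again
    done
}

pids=()
for ((i = 0; i < 8; i++)); do
    add_remove $((i + 1)) "null$i" &   # one background worker per namespace/bdev pair
    pids+=($!)
done
wait "${pids[@]}"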
00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1557780 1557781 1557783 1557785 1557787 1557789 1557790 1557792 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:30.987 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:31.245 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.245 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:31.246 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.505 18:03:31 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:31.505 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.506 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:31.506 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:31.506 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:31.763 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.763 18:03:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:31.763 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:31.763 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:31.763 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:31.763 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.763 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:31.763 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.021 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.022 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:32.280 18:03:32 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.280 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:32.538 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:32.538 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:32.538 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.539 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:32.797 18:03:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:32.797 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.054 18:03:33 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:33.054 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.312 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.571 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.830 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:33 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:33.830 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.089 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:34.089 
18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.090 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.349 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.608 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:34.609 rmmod nvme_rdma 00:11:34.609 rmmod nvme_fabrics 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1550548 ']' 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1550548 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1550548 ']' 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1550548 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1550548 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1550548' 00:11:34.609 killing process with pid 1550548 00:11:34.609 18:03:34 
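Note: the interleaved (( ++i )), nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns entries above all come from the stress loop tagged ns_hotplug_stress.sh@16-@18 in the xtrace output. A minimal sketch of that pattern, assuming the eight null bdevs null0..null7 already exist on the subsystem and that the RPCs are issued concurrently (which would explain the shuffled ordering seen in the log):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do
    # attach null0..null7 as namespaces 1..8; launched in the background so the
    # order the target sees them in varies from pass to pass
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # detach all eight namespaces again before the next pass
    for n in {1..8}; do
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done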
nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1550548 00:11:34.609 18:03:34 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1550548 00:11:34.868 18:03:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.868 18:03:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:34.868 00:11:34.868 real 0m50.425s 00:11:34.868 user 3m18.592s 00:11:34.868 sys 0m15.736s 00:11:34.868 18:03:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.868 18:03:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.868 ************************************ 00:11:34.868 END TEST nvmf_ns_hotplug_stress 00:11:34.868 ************************************ 00:11:34.868 18:03:35 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:34.868 18:03:35 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:34.868 18:03:35 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:34.868 18:03:35 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.868 18:03:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:34.868 ************************************ 00:11:34.868 START TEST nvmf_connect_stress 00:11:34.868 ************************************ 00:11:34.868 18:03:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:35.127 * Looking for test storage... 00:11:35.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.127 18:03:35 
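Note: the rmmod/killprocess lines above are the teardown path (nvmftestfini from test/nvmf/common.sh plus killprocess from autotest_common.sh). A simplified reconstruction of what that teardown does for the rdma transport; the module names, retry loop bound and pid are taken from the trace, the function body itself is an approximation:

nvmftestfini_sketch() {
    sync
    # unload the kernel initiator modules pulled in for rdma testing; the real
    # helper retries this (for i in {1..20}) because the modules can be busy briefly
    modprobe -v -r nvme-rdma || true
    modprobe -v -r nvme-fabrics || true
    # stop the nvmf_tgt app started for this test (pid 1550548 in this run)
    if [ -n "$nvmfpid" ] && kill -0 "$nvmfpid" 2>/dev/null; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid" 2>/dev/null || true
    fi
}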
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- 
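Note: common.sh@17-@19 above generate the host identity that nvme connect uses later. A short sketch, assuming the host ID is simply the UUID portion of the generated NQN (that derivation is not itself visible in the trace):

NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}    # 8013ee90-59d8-e711-906e-00163566263e in this run
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")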
nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.127 18:03:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:43.252 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:43.252 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:43.252 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:43.253 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:43.253 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:43.253 18:03:43 
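Note: the two "Found net devices under 0000:d9:00.x" lines above come from mapping each detected Mellanox PCI function to its netdev through sysfs (common.sh@383 and @399-@400). A sketch of that lookup using the PCI addresses from this run:

for pci in 0000:d9:00.0 0000:d9:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done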
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:43.253 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.253 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:43.253 altname enp217s0f0np0 00:11:43.253 altname ens818f0np0 00:11:43.253 inet 192.168.100.8/24 scope global mlx_0_0 00:11:43.253 valid_lft forever preferred_lft forever 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 
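Note: rdma_device_init/load_ib_rdma_modules (common.sh@58-@68 above) loads the RDMA core stack before any interface work starts. A sketch reconstructed from the modprobe lines in the trace:

load_ib_rdma_modules_sketch() {
    [ "$(uname)" != Linux ] && return 0
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
}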
00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:43.253 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:43.253 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:43.253 altname enp217s0f1np1 00:11:43.253 altname ens818f1np1 00:11:43.253 inet 192.168.100.9/24 scope global mlx_0_1 00:11:43.253 valid_lft forever preferred_lft forever 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:11:43.253 
18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:43.253 192.168.100.9' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:43.253 192.168.100.9' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:43.253 192.168.100.9' 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:11:43.253 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1562645 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1562645 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1562645 ']' 00:11:43.254 18:03:43 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.254 18:03:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.254 [2024-07-15 18:03:43.618658] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:11:43.254 [2024-07-15 18:03:43.618709] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.512 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.512 [2024-07-15 18:03:43.703567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:43.512 [2024-07-15 18:03:43.775230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.512 [2024-07-15 18:03:43.775273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.512 [2024-07-15 18:03:43.775283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.512 [2024-07-15 18:03:43.775291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.512 [2024-07-15 18:03:43.775298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.512 [2024-07-15 18:03:43.775401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.512 [2024-07-15 18:03:43.775487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.512 [2024-07-15 18:03:43.775489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.079 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.338 [2024-07-15 18:03:44.501278] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9f4500/0x9f89f0) succeed. 00:11:44.338 [2024-07-15 18:03:44.510315] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9f5aa0/0xa3a080) succeed. 
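Note: the nvmfappstart/waitforlisten step above amounts to launching nvmf_tgt on the requested core mask and polling its RPC socket until it answers. A rough sketch under that reading (the rpc_get_methods polling is an approximation of waitforlisten, not a copy of it; paths are the ones printed in this log):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &    # -m 0xE = cores 1-3, as in the reactor notices above
  nvmfpid=$!
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done
  kill -0 "$nvmfpid"        # fail if the target died while we were waiting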
00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.338 [2024-07-15 18:03:44.628164] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.338 NULL1 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1562924 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 EAL: No free 2048 kB hugepages reported on node 1 
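Note: the rpc_cmd calls traced above configure the target end to end and then start the stress client. Roughly the same sequence, issued directly with rpc.py (NQN, addresses, and sizes copied from the trace; the RPC/SPDK helper variables are illustrative):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $RPC bdev_null_create NULL1 1000 512          # 1000 MB null bdev, 512-byte blocks

  # The stress client is then pointed at that listener (command line from the trace):
  "$SPDK/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # Shape of the loop behind the long run of "kill -0 1562924" lines that follows:
  # keep feeding RPCs to the target while the client is alive, then reap it.
  while kill -0 "$PERF_PID" 2> /dev/null; do
      rpc_cmd < "$rpcs"       # rpc.txt, assembled by the "cat" loop traced above
  done
  wait "$PERF_PID" || true    # "No such process" / a non-zero exit is expected here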
00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.338 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.596 18:03:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.855 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.855 18:03:45 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:44.855 18:03:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.855 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.855 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.113 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.113 18:03:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:45.113 18:03:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.113 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.113 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.371 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.371 18:03:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:45.371 18:03:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.371 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.371 18:03:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.938 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.938 18:03:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:45.938 18:03:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:45.938 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.938 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.197 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.197 18:03:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:46.197 18:03:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.197 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.197 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.456 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.456 18:03:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:46.456 18:03:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.456 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.457 18:03:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.715 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.715 18:03:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:46.715 18:03:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.715 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.715 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.974 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.974 18:03:47 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:46.974 18:03:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:46.975 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.975 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.543 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.543 18:03:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:47.543 18:03:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.543 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.543 18:03:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:47.802 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.802 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:47.802 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:47.802 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.802 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.061 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.061 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:48.061 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.061 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.061 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.320 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.320 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:48.320 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.320 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.320 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.613 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.902 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:48.902 18:03:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.902 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.902 18:03:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.161 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.161 18:03:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:49.161 18:03:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.161 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.161 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.420 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.420 18:03:49 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:49.420 18:03:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.420 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.420 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.679 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.679 18:03:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:49.679 18:03:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.679 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.679 18:03:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.939 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.939 18:03:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:49.939 18:03:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.939 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.939 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.506 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.506 18:03:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:50.506 18:03:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.506 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.506 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.765 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.765 18:03:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:50.765 18:03:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.765 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.765 18:03:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.023 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.023 18:03:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:51.023 18:03:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.024 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.024 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.281 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.281 18:03:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:51.281 18:03:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.281 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.281 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.539 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.539 18:03:51 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:51.539 18:03:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.539 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.539 18:03:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.106 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.106 18:03:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:52.106 18:03:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.106 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.106 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.365 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.365 18:03:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:52.365 18:03:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.365 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.365 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.623 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.623 18:03:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:52.623 18:03:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.623 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.623 18:03:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.882 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.882 18:03:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:52.882 18:03:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.882 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.882 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.449 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.449 18:03:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:53.449 18:03:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.449 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.449 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.709 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.709 18:03:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:53.709 18:03:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.709 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.709 18:03:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.967 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.967 18:03:54 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:53.967 18:03:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.967 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.967 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.226 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.226 18:03:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:54.226 18:03:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.226 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.226 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.486 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.486 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.486 18:03:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:54.486 18:03:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.486 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.486 18:03:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1562924 00:11:55.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1562924) - No such process 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1562924 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:55.054 rmmod nvme_rdma 00:11:55.054 rmmod nvme_fabrics 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1562645 ']' 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1562645 00:11:55.054 18:03:55 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1562645 ']' 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1562645 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1562645 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1562645' 00:11:55.054 killing process with pid 1562645 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1562645 00:11:55.054 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1562645 00:11:55.313 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.313 18:03:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:55.313 00:11:55.313 real 0m20.293s 00:11:55.313 user 0m42.912s 00:11:55.313 sys 0m8.825s 00:11:55.313 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.313 18:03:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.313 ************************************ 00:11:55.313 END TEST nvmf_connect_stress 00:11:55.313 ************************************ 00:11:55.313 18:03:55 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:55.313 18:03:55 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:55.313 18:03:55 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.313 18:03:55 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.313 18:03:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:55.313 ************************************ 00:11:55.313 START TEST nvmf_fused_ordering 00:11:55.313 ************************************ 00:11:55.313 18:03:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:55.572 * Looking for test storage... 
00:11:55.572 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.572 18:03:55 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:03.696 18:04:03 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:03.696 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:03.696 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:03.696 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:03.696 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:12:03.696 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:03.697 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.697 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:03.697 altname enp217s0f0np0 00:12:03.697 altname ens818f0np0 00:12:03.697 inet 192.168.100.8/24 scope global mlx_0_0 00:12:03.697 valid_lft forever preferred_lft forever 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:03.697 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.697 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:03.697 altname enp217s0f1np1 00:12:03.697 altname ens818f1np1 00:12:03.697 inet 192.168.100.9/24 scope global mlx_0_1 00:12:03.697 valid_lft forever preferred_lft forever 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:03.697 192.168.100.9' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:03.697 192.168.100.9' 
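The allocate_nic_ips and get_available_rdma_ips passes above both boil down to the same pipeline: list the IPv4 address on each mlx interface with `ip -o -4 addr show` and strip the prefix length. The snippet below is a minimal standalone sketch of that step, not the real nvmf/common.sh helpers; `get_ip_address_sketch` is a made-up name and the interface names are simply the ones visible in this log.

#!/usr/bin/env bash
# Sketch only -- mirrors the "ip -o -4 addr show | awk | cut" pipeline recorded above.
get_ip_address_sketch() {
    local interface=$1
    # -o prints one record per line; field 4 is "ADDR/PREFIX", so drop the "/PREFIX" part.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    addr=$(get_ip_address_sketch "$nic")
    echo "$nic: ${addr:-<no IPv4 address assigned>}"
done

The two addresses this yields here (192.168.100.8 and 192.168.100.9) are what the head/tail step in the next records turns into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP.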
00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:03.697 192.168.100.9' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1568723 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1568723 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1568723 ']' 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.697 18:04:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:03.697 [2024-07-15 18:04:03.679433] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:12:03.697 [2024-07-15 18:04:03.679483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.697 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.697 [2024-07-15 18:04:03.761692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.697 [2024-07-15 18:04:03.834317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:03.697 [2024-07-15 18:04:03.834356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.697 [2024-07-15 18:04:03.834365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.697 [2024-07-15 18:04:03.834373] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.697 [2024-07-15 18:04:03.834380] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.697 [2024-07-15 18:04:03.834399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.265 [2024-07-15 18:04:04.534175] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ac4e20/0x1ac9310) succeed. 00:12:04.265 [2024-07-15 18:04:04.543291] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ac6320/0x1b0a9a0) succeed. 
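At this point the target application is running and the RDMA transport exists. nvmfappstart, waitforlisten and rpc_cmd are test-framework wrappers; the sketch below only replays the flags visible in the log, driving scripts/rpc.py directly and using a simple socket poll as a stand-in for waitforlisten, so treat it as an illustration rather than the framework code.

#!/usr/bin/env bash
set -e
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path as it appears in this log
RPC_SOCK=/var/tmp/spdk.sock

# Same launch as nvmfappstart above: shm id 0, all tracepoint groups, core mask 0x2.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC UNIX socket shows up.
for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    sleep 0.1
done

# Same transport options the test requests: RDMA, 1024 shared buffers, 8192-byte in-capsule data.
"$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Stopping the target again is just a matter of signalling $nvmfpid, which is what the framework's killprocess call does at the end of this test.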
00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.265 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.266 [2024-07-15 18:04:04.601789] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.266 NULL1 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.266 18:04:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:04.266 [2024-07-15 18:04:04.645642] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
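The four rpc_cmd calls above build the target side that the fused_ordering tool then connects to. Below is an illustrative replay of the same sequence issued straight through scripts/rpc.py, with every value copied from the log; it is a sketch of what those RPCs amount to, not the fused_ordering.sh script itself.

#!/usr/bin/env bash
set -e
RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Subsystem cnode1: allow any host (-a), fixed serial, at most 10 namespaces (-m 10).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listen on the first RDMA address found earlier, default NVMe-oF port 4420.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Null bdev as the namespace backing store: 1000 MB total, 512-byte blocks (reported as "size: 1GB" below).
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

With that in place the test binary is pointed at the listener using the -r string shown above (trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1), and the fused_ordering(0) through fused_ordering(1023) lines that follow are that tool's numbered output.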
00:12:04.266 [2024-07-15 18:04:04.645670] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1568841 ] 00:12:04.525 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.525 Attached to nqn.2016-06.io.spdk:cnode1 00:12:04.525 Namespace ID: 1 size: 1GB 00:12:04.525 fused_ordering(0) 00:12:04.525 fused_ordering(1) 00:12:04.525 fused_ordering(2) 00:12:04.525 fused_ordering(3) 00:12:04.525 fused_ordering(4) 00:12:04.525 fused_ordering(5) 00:12:04.525 fused_ordering(6) 00:12:04.525 fused_ordering(7) 00:12:04.525 fused_ordering(8) 00:12:04.525 fused_ordering(9) 00:12:04.525 fused_ordering(10) 00:12:04.525 fused_ordering(11) 00:12:04.525 fused_ordering(12) 00:12:04.525 fused_ordering(13) 00:12:04.525 fused_ordering(14) 00:12:04.525 fused_ordering(15) 00:12:04.525 fused_ordering(16) 00:12:04.525 fused_ordering(17) 00:12:04.525 fused_ordering(18) 00:12:04.525 fused_ordering(19) 00:12:04.525 fused_ordering(20) 00:12:04.525 fused_ordering(21) 00:12:04.525 fused_ordering(22) 00:12:04.525 fused_ordering(23) 00:12:04.525 fused_ordering(24) 00:12:04.525 fused_ordering(25) 00:12:04.525 fused_ordering(26) 00:12:04.525 fused_ordering(27) 00:12:04.525 fused_ordering(28) 00:12:04.525 fused_ordering(29) 00:12:04.525 fused_ordering(30) 00:12:04.525 fused_ordering(31) 00:12:04.525 fused_ordering(32) 00:12:04.525 fused_ordering(33) 00:12:04.525 fused_ordering(34) 00:12:04.525 fused_ordering(35) 00:12:04.525 fused_ordering(36) 00:12:04.525 fused_ordering(37) 00:12:04.525 fused_ordering(38) 00:12:04.525 fused_ordering(39) 00:12:04.525 fused_ordering(40) 00:12:04.525 fused_ordering(41) 00:12:04.525 fused_ordering(42) 00:12:04.525 fused_ordering(43) 00:12:04.525 fused_ordering(44) 00:12:04.525 fused_ordering(45) 00:12:04.525 fused_ordering(46) 00:12:04.525 fused_ordering(47) 00:12:04.525 fused_ordering(48) 00:12:04.525 fused_ordering(49) 00:12:04.525 fused_ordering(50) 00:12:04.525 fused_ordering(51) 00:12:04.525 fused_ordering(52) 00:12:04.525 fused_ordering(53) 00:12:04.525 fused_ordering(54) 00:12:04.525 fused_ordering(55) 00:12:04.525 fused_ordering(56) 00:12:04.525 fused_ordering(57) 00:12:04.525 fused_ordering(58) 00:12:04.525 fused_ordering(59) 00:12:04.525 fused_ordering(60) 00:12:04.525 fused_ordering(61) 00:12:04.525 fused_ordering(62) 00:12:04.525 fused_ordering(63) 00:12:04.525 fused_ordering(64) 00:12:04.525 fused_ordering(65) 00:12:04.525 fused_ordering(66) 00:12:04.525 fused_ordering(67) 00:12:04.525 fused_ordering(68) 00:12:04.525 fused_ordering(69) 00:12:04.525 fused_ordering(70) 00:12:04.525 fused_ordering(71) 00:12:04.525 fused_ordering(72) 00:12:04.525 fused_ordering(73) 00:12:04.525 fused_ordering(74) 00:12:04.525 fused_ordering(75) 00:12:04.525 fused_ordering(76) 00:12:04.525 fused_ordering(77) 00:12:04.525 fused_ordering(78) 00:12:04.525 fused_ordering(79) 00:12:04.525 fused_ordering(80) 00:12:04.525 fused_ordering(81) 00:12:04.525 fused_ordering(82) 00:12:04.525 fused_ordering(83) 00:12:04.525 fused_ordering(84) 00:12:04.525 fused_ordering(85) 00:12:04.525 fused_ordering(86) 00:12:04.525 fused_ordering(87) 00:12:04.525 fused_ordering(88) 00:12:04.525 fused_ordering(89) 00:12:04.525 fused_ordering(90) 00:12:04.525 fused_ordering(91) 00:12:04.525 fused_ordering(92) 00:12:04.525 fused_ordering(93) 00:12:04.525 fused_ordering(94) 00:12:04.525 fused_ordering(95) 00:12:04.525 fused_ordering(96) 
00:12:04.525 fused_ordering(97) 00:12:04.525 fused_ordering(98) 00:12:04.525 fused_ordering(99) 00:12:04.525 fused_ordering(100) 00:12:04.525 fused_ordering(101) 00:12:04.525 fused_ordering(102) 00:12:04.525 fused_ordering(103) 00:12:04.525 fused_ordering(104) 00:12:04.525 fused_ordering(105) 00:12:04.525 fused_ordering(106) 00:12:04.525 fused_ordering(107) 00:12:04.525 fused_ordering(108) 00:12:04.525 fused_ordering(109) 00:12:04.525 fused_ordering(110) 00:12:04.525 fused_ordering(111) 00:12:04.525 fused_ordering(112) 00:12:04.525 fused_ordering(113) 00:12:04.525 fused_ordering(114) 00:12:04.525 fused_ordering(115) 00:12:04.525 fused_ordering(116) 00:12:04.525 fused_ordering(117) 00:12:04.525 fused_ordering(118) 00:12:04.525 fused_ordering(119) 00:12:04.525 fused_ordering(120) 00:12:04.525 fused_ordering(121) 00:12:04.525 fused_ordering(122) 00:12:04.525 fused_ordering(123) 00:12:04.525 fused_ordering(124) 00:12:04.525 fused_ordering(125) 00:12:04.525 fused_ordering(126) 00:12:04.525 fused_ordering(127) 00:12:04.525 fused_ordering(128) 00:12:04.525 fused_ordering(129) 00:12:04.525 fused_ordering(130) 00:12:04.525 fused_ordering(131) 00:12:04.525 fused_ordering(132) 00:12:04.525 fused_ordering(133) 00:12:04.525 fused_ordering(134) 00:12:04.525 fused_ordering(135) 00:12:04.525 fused_ordering(136) 00:12:04.525 fused_ordering(137) 00:12:04.525 fused_ordering(138) 00:12:04.525 fused_ordering(139) 00:12:04.525 fused_ordering(140) 00:12:04.525 fused_ordering(141) 00:12:04.525 fused_ordering(142) 00:12:04.525 fused_ordering(143) 00:12:04.525 fused_ordering(144) 00:12:04.525 fused_ordering(145) 00:12:04.525 fused_ordering(146) 00:12:04.526 fused_ordering(147) 00:12:04.526 fused_ordering(148) 00:12:04.526 fused_ordering(149) 00:12:04.526 fused_ordering(150) 00:12:04.526 fused_ordering(151) 00:12:04.526 fused_ordering(152) 00:12:04.526 fused_ordering(153) 00:12:04.526 fused_ordering(154) 00:12:04.526 fused_ordering(155) 00:12:04.526 fused_ordering(156) 00:12:04.526 fused_ordering(157) 00:12:04.526 fused_ordering(158) 00:12:04.526 fused_ordering(159) 00:12:04.526 fused_ordering(160) 00:12:04.526 fused_ordering(161) 00:12:04.526 fused_ordering(162) 00:12:04.526 fused_ordering(163) 00:12:04.526 fused_ordering(164) 00:12:04.526 fused_ordering(165) 00:12:04.526 fused_ordering(166) 00:12:04.526 fused_ordering(167) 00:12:04.526 fused_ordering(168) 00:12:04.526 fused_ordering(169) 00:12:04.526 fused_ordering(170) 00:12:04.526 fused_ordering(171) 00:12:04.526 fused_ordering(172) 00:12:04.526 fused_ordering(173) 00:12:04.526 fused_ordering(174) 00:12:04.526 fused_ordering(175) 00:12:04.526 fused_ordering(176) 00:12:04.526 fused_ordering(177) 00:12:04.526 fused_ordering(178) 00:12:04.526 fused_ordering(179) 00:12:04.526 fused_ordering(180) 00:12:04.526 fused_ordering(181) 00:12:04.526 fused_ordering(182) 00:12:04.526 fused_ordering(183) 00:12:04.526 fused_ordering(184) 00:12:04.526 fused_ordering(185) 00:12:04.526 fused_ordering(186) 00:12:04.526 fused_ordering(187) 00:12:04.526 fused_ordering(188) 00:12:04.526 fused_ordering(189) 00:12:04.526 fused_ordering(190) 00:12:04.526 fused_ordering(191) 00:12:04.526 fused_ordering(192) 00:12:04.526 fused_ordering(193) 00:12:04.526 fused_ordering(194) 00:12:04.526 fused_ordering(195) 00:12:04.526 fused_ordering(196) 00:12:04.526 fused_ordering(197) 00:12:04.526 fused_ordering(198) 00:12:04.526 fused_ordering(199) 00:12:04.526 fused_ordering(200) 00:12:04.526 fused_ordering(201) 00:12:04.526 fused_ordering(202) 00:12:04.526 fused_ordering(203) 00:12:04.526 
fused_ordering(204) 00:12:04.526 fused_ordering(205) 00:12:04.526 fused_ordering(206) 00:12:04.526 fused_ordering(207) 00:12:04.526 fused_ordering(208) 00:12:04.526 fused_ordering(209) 00:12:04.526 fused_ordering(210) 00:12:04.526 fused_ordering(211) 00:12:04.526 fused_ordering(212) 00:12:04.526 fused_ordering(213) 00:12:04.526 fused_ordering(214) 00:12:04.526 fused_ordering(215) 00:12:04.526 fused_ordering(216) 00:12:04.526 fused_ordering(217) 00:12:04.526 fused_ordering(218) 00:12:04.526 fused_ordering(219) 00:12:04.526 fused_ordering(220) 00:12:04.526 fused_ordering(221) 00:12:04.526 fused_ordering(222) 00:12:04.526 fused_ordering(223) 00:12:04.526 fused_ordering(224) 00:12:04.526 fused_ordering(225) 00:12:04.526 fused_ordering(226) 00:12:04.526 fused_ordering(227) 00:12:04.526 fused_ordering(228) 00:12:04.526 fused_ordering(229) 00:12:04.526 fused_ordering(230) 00:12:04.526 fused_ordering(231) 00:12:04.526 fused_ordering(232) 00:12:04.526 fused_ordering(233) 00:12:04.526 fused_ordering(234) 00:12:04.526 fused_ordering(235) 00:12:04.526 fused_ordering(236) 00:12:04.526 fused_ordering(237) 00:12:04.526 fused_ordering(238) 00:12:04.526 fused_ordering(239) 00:12:04.526 fused_ordering(240) 00:12:04.526 fused_ordering(241) 00:12:04.526 fused_ordering(242) 00:12:04.526 fused_ordering(243) 00:12:04.526 fused_ordering(244) 00:12:04.526 fused_ordering(245) 00:12:04.526 fused_ordering(246) 00:12:04.526 fused_ordering(247) 00:12:04.526 fused_ordering(248) 00:12:04.526 fused_ordering(249) 00:12:04.526 fused_ordering(250) 00:12:04.526 fused_ordering(251) 00:12:04.526 fused_ordering(252) 00:12:04.526 fused_ordering(253) 00:12:04.526 fused_ordering(254) 00:12:04.526 fused_ordering(255) 00:12:04.526 fused_ordering(256) 00:12:04.526 fused_ordering(257) 00:12:04.526 fused_ordering(258) 00:12:04.526 fused_ordering(259) 00:12:04.526 fused_ordering(260) 00:12:04.526 fused_ordering(261) 00:12:04.526 fused_ordering(262) 00:12:04.526 fused_ordering(263) 00:12:04.526 fused_ordering(264) 00:12:04.526 fused_ordering(265) 00:12:04.526 fused_ordering(266) 00:12:04.526 fused_ordering(267) 00:12:04.526 fused_ordering(268) 00:12:04.526 fused_ordering(269) 00:12:04.526 fused_ordering(270) 00:12:04.526 fused_ordering(271) 00:12:04.526 fused_ordering(272) 00:12:04.526 fused_ordering(273) 00:12:04.526 fused_ordering(274) 00:12:04.526 fused_ordering(275) 00:12:04.526 fused_ordering(276) 00:12:04.526 fused_ordering(277) 00:12:04.526 fused_ordering(278) 00:12:04.526 fused_ordering(279) 00:12:04.526 fused_ordering(280) 00:12:04.526 fused_ordering(281) 00:12:04.526 fused_ordering(282) 00:12:04.526 fused_ordering(283) 00:12:04.526 fused_ordering(284) 00:12:04.526 fused_ordering(285) 00:12:04.526 fused_ordering(286) 00:12:04.526 fused_ordering(287) 00:12:04.526 fused_ordering(288) 00:12:04.526 fused_ordering(289) 00:12:04.526 fused_ordering(290) 00:12:04.526 fused_ordering(291) 00:12:04.526 fused_ordering(292) 00:12:04.526 fused_ordering(293) 00:12:04.526 fused_ordering(294) 00:12:04.526 fused_ordering(295) 00:12:04.526 fused_ordering(296) 00:12:04.526 fused_ordering(297) 00:12:04.526 fused_ordering(298) 00:12:04.526 fused_ordering(299) 00:12:04.526 fused_ordering(300) 00:12:04.526 fused_ordering(301) 00:12:04.526 fused_ordering(302) 00:12:04.526 fused_ordering(303) 00:12:04.526 fused_ordering(304) 00:12:04.526 fused_ordering(305) 00:12:04.526 fused_ordering(306) 00:12:04.526 fused_ordering(307) 00:12:04.526 fused_ordering(308) 00:12:04.526 fused_ordering(309) 00:12:04.526 fused_ordering(310) 00:12:04.526 fused_ordering(311) 
00:12:04.526 fused_ordering(312) 00:12:04.526 fused_ordering(313) 00:12:04.526 fused_ordering(314) 00:12:04.526 fused_ordering(315) 00:12:04.526 fused_ordering(316) 00:12:04.526 fused_ordering(317) 00:12:04.526 fused_ordering(318) 00:12:04.526 fused_ordering(319) 00:12:04.526 fused_ordering(320) 00:12:04.526 fused_ordering(321) 00:12:04.526 fused_ordering(322) 00:12:04.526 fused_ordering(323) 00:12:04.526 fused_ordering(324) 00:12:04.526 fused_ordering(325) 00:12:04.526 fused_ordering(326) 00:12:04.526 fused_ordering(327) 00:12:04.526 fused_ordering(328) 00:12:04.526 fused_ordering(329) 00:12:04.526 fused_ordering(330) 00:12:04.526 fused_ordering(331) 00:12:04.526 fused_ordering(332) 00:12:04.526 fused_ordering(333) 00:12:04.526 fused_ordering(334) 00:12:04.526 fused_ordering(335) 00:12:04.526 fused_ordering(336) 00:12:04.526 fused_ordering(337) 00:12:04.526 fused_ordering(338) 00:12:04.526 fused_ordering(339) 00:12:04.526 fused_ordering(340) 00:12:04.526 fused_ordering(341) 00:12:04.526 fused_ordering(342) 00:12:04.526 fused_ordering(343) 00:12:04.526 fused_ordering(344) 00:12:04.526 fused_ordering(345) 00:12:04.526 fused_ordering(346) 00:12:04.526 fused_ordering(347) 00:12:04.526 fused_ordering(348) 00:12:04.526 fused_ordering(349) 00:12:04.526 fused_ordering(350) 00:12:04.526 fused_ordering(351) 00:12:04.526 fused_ordering(352) 00:12:04.526 fused_ordering(353) 00:12:04.526 fused_ordering(354) 00:12:04.526 fused_ordering(355) 00:12:04.526 fused_ordering(356) 00:12:04.526 fused_ordering(357) 00:12:04.526 fused_ordering(358) 00:12:04.526 fused_ordering(359) 00:12:04.526 fused_ordering(360) 00:12:04.526 fused_ordering(361) 00:12:04.527 fused_ordering(362) 00:12:04.527 fused_ordering(363) 00:12:04.527 fused_ordering(364) 00:12:04.527 fused_ordering(365) 00:12:04.527 fused_ordering(366) 00:12:04.527 fused_ordering(367) 00:12:04.527 fused_ordering(368) 00:12:04.527 fused_ordering(369) 00:12:04.527 fused_ordering(370) 00:12:04.527 fused_ordering(371) 00:12:04.527 fused_ordering(372) 00:12:04.527 fused_ordering(373) 00:12:04.527 fused_ordering(374) 00:12:04.527 fused_ordering(375) 00:12:04.527 fused_ordering(376) 00:12:04.527 fused_ordering(377) 00:12:04.527 fused_ordering(378) 00:12:04.527 fused_ordering(379) 00:12:04.527 fused_ordering(380) 00:12:04.527 fused_ordering(381) 00:12:04.527 fused_ordering(382) 00:12:04.527 fused_ordering(383) 00:12:04.527 fused_ordering(384) 00:12:04.527 fused_ordering(385) 00:12:04.527 fused_ordering(386) 00:12:04.527 fused_ordering(387) 00:12:04.527 fused_ordering(388) 00:12:04.527 fused_ordering(389) 00:12:04.527 fused_ordering(390) 00:12:04.527 fused_ordering(391) 00:12:04.527 fused_ordering(392) 00:12:04.527 fused_ordering(393) 00:12:04.527 fused_ordering(394) 00:12:04.527 fused_ordering(395) 00:12:04.527 fused_ordering(396) 00:12:04.527 fused_ordering(397) 00:12:04.527 fused_ordering(398) 00:12:04.527 fused_ordering(399) 00:12:04.527 fused_ordering(400) 00:12:04.527 fused_ordering(401) 00:12:04.527 fused_ordering(402) 00:12:04.527 fused_ordering(403) 00:12:04.527 fused_ordering(404) 00:12:04.527 fused_ordering(405) 00:12:04.527 fused_ordering(406) 00:12:04.527 fused_ordering(407) 00:12:04.527 fused_ordering(408) 00:12:04.527 fused_ordering(409) 00:12:04.527 fused_ordering(410) 00:12:04.794 fused_ordering(411) 00:12:04.794 fused_ordering(412) 00:12:04.794 fused_ordering(413) 00:12:04.794 fused_ordering(414) 00:12:04.794 fused_ordering(415) 00:12:04.794 fused_ordering(416) 00:12:04.794 fused_ordering(417) 00:12:04.794 fused_ordering(418) 00:12:04.794 
fused_ordering(419) 00:12:04.794 fused_ordering(420) 00:12:04.794 fused_ordering(421) 00:12:04.794 fused_ordering(422) 00:12:04.794 fused_ordering(423) 00:12:04.794 fused_ordering(424) 00:12:04.794 fused_ordering(425) 00:12:04.794 fused_ordering(426) 00:12:04.794 fused_ordering(427) 00:12:04.794 fused_ordering(428) 00:12:04.794 fused_ordering(429) 00:12:04.794 fused_ordering(430) 00:12:04.794 fused_ordering(431) 00:12:04.794 fused_ordering(432) 00:12:04.794 fused_ordering(433) 00:12:04.794 fused_ordering(434) 00:12:04.794 fused_ordering(435) 00:12:04.794 fused_ordering(436) 00:12:04.794 fused_ordering(437) 00:12:04.794 fused_ordering(438) 00:12:04.794 fused_ordering(439) 00:12:04.794 fused_ordering(440) 00:12:04.794 fused_ordering(441) 00:12:04.794 fused_ordering(442) 00:12:04.794 fused_ordering(443) 00:12:04.794 fused_ordering(444) 00:12:04.794 fused_ordering(445) 00:12:04.794 fused_ordering(446) 00:12:04.794 fused_ordering(447) 00:12:04.794 fused_ordering(448) 00:12:04.794 fused_ordering(449) 00:12:04.794 fused_ordering(450) 00:12:04.794 fused_ordering(451) 00:12:04.794 fused_ordering(452) 00:12:04.794 fused_ordering(453) 00:12:04.794 fused_ordering(454) 00:12:04.794 fused_ordering(455) 00:12:04.794 fused_ordering(456) 00:12:04.794 fused_ordering(457) 00:12:04.794 fused_ordering(458) 00:12:04.794 fused_ordering(459) 00:12:04.794 fused_ordering(460) 00:12:04.794 fused_ordering(461) 00:12:04.794 fused_ordering(462) 00:12:04.794 fused_ordering(463) 00:12:04.794 fused_ordering(464) 00:12:04.794 fused_ordering(465) 00:12:04.794 fused_ordering(466) 00:12:04.794 fused_ordering(467) 00:12:04.794 fused_ordering(468) 00:12:04.794 fused_ordering(469) 00:12:04.794 fused_ordering(470) 00:12:04.794 fused_ordering(471) 00:12:04.794 fused_ordering(472) 00:12:04.794 fused_ordering(473) 00:12:04.794 fused_ordering(474) 00:12:04.794 fused_ordering(475) 00:12:04.794 fused_ordering(476) 00:12:04.794 fused_ordering(477) 00:12:04.794 fused_ordering(478) 00:12:04.794 fused_ordering(479) 00:12:04.794 fused_ordering(480) 00:12:04.794 fused_ordering(481) 00:12:04.794 fused_ordering(482) 00:12:04.794 fused_ordering(483) 00:12:04.794 fused_ordering(484) 00:12:04.794 fused_ordering(485) 00:12:04.794 fused_ordering(486) 00:12:04.794 fused_ordering(487) 00:12:04.794 fused_ordering(488) 00:12:04.794 fused_ordering(489) 00:12:04.794 fused_ordering(490) 00:12:04.794 fused_ordering(491) 00:12:04.794 fused_ordering(492) 00:12:04.794 fused_ordering(493) 00:12:04.794 fused_ordering(494) 00:12:04.794 fused_ordering(495) 00:12:04.794 fused_ordering(496) 00:12:04.794 fused_ordering(497) 00:12:04.794 fused_ordering(498) 00:12:04.794 fused_ordering(499) 00:12:04.794 fused_ordering(500) 00:12:04.794 fused_ordering(501) 00:12:04.794 fused_ordering(502) 00:12:04.794 fused_ordering(503) 00:12:04.794 fused_ordering(504) 00:12:04.794 fused_ordering(505) 00:12:04.794 fused_ordering(506) 00:12:04.794 fused_ordering(507) 00:12:04.794 fused_ordering(508) 00:12:04.794 fused_ordering(509) 00:12:04.794 fused_ordering(510) 00:12:04.794 fused_ordering(511) 00:12:04.794 fused_ordering(512) 00:12:04.794 fused_ordering(513) 00:12:04.794 fused_ordering(514) 00:12:04.794 fused_ordering(515) 00:12:04.794 fused_ordering(516) 00:12:04.794 fused_ordering(517) 00:12:04.794 fused_ordering(518) 00:12:04.794 fused_ordering(519) 00:12:04.794 fused_ordering(520) 00:12:04.794 fused_ordering(521) 00:12:04.794 fused_ordering(522) 00:12:04.794 fused_ordering(523) 00:12:04.794 fused_ordering(524) 00:12:04.794 fused_ordering(525) 00:12:04.794 fused_ordering(526) 
00:12:04.794 fused_ordering(527) 00:12:04.794 fused_ordering(528) 00:12:04.794 fused_ordering(529) 00:12:04.794 fused_ordering(530) 00:12:04.794 fused_ordering(531) 00:12:04.794 fused_ordering(532) 00:12:04.794 fused_ordering(533) 00:12:04.795 fused_ordering(534) 00:12:04.795 fused_ordering(535) 00:12:04.795 fused_ordering(536) 00:12:04.795 fused_ordering(537) 00:12:04.795 fused_ordering(538) 00:12:04.795 fused_ordering(539) 00:12:04.795 fused_ordering(540) 00:12:04.795 fused_ordering(541) 00:12:04.795 fused_ordering(542) 00:12:04.795 fused_ordering(543) 00:12:04.795 fused_ordering(544) 00:12:04.795 fused_ordering(545) 00:12:04.795 fused_ordering(546) 00:12:04.795 fused_ordering(547) 00:12:04.795 fused_ordering(548) 00:12:04.795 fused_ordering(549) 00:12:04.795 fused_ordering(550) 00:12:04.795 fused_ordering(551) 00:12:04.795 fused_ordering(552) 00:12:04.795 fused_ordering(553) 00:12:04.795 fused_ordering(554) 00:12:04.795 fused_ordering(555) 00:12:04.795 fused_ordering(556) 00:12:04.795 fused_ordering(557) 00:12:04.795 fused_ordering(558) 00:12:04.795 fused_ordering(559) 00:12:04.795 fused_ordering(560) 00:12:04.795 fused_ordering(561) 00:12:04.795 fused_ordering(562) 00:12:04.795 fused_ordering(563) 00:12:04.795 fused_ordering(564) 00:12:04.795 fused_ordering(565) 00:12:04.795 fused_ordering(566) 00:12:04.795 fused_ordering(567) 00:12:04.795 fused_ordering(568) 00:12:04.795 fused_ordering(569) 00:12:04.795 fused_ordering(570) 00:12:04.795 fused_ordering(571) 00:12:04.795 fused_ordering(572) 00:12:04.795 fused_ordering(573) 00:12:04.795 fused_ordering(574) 00:12:04.795 fused_ordering(575) 00:12:04.795 fused_ordering(576) 00:12:04.795 fused_ordering(577) 00:12:04.795 fused_ordering(578) 00:12:04.795 fused_ordering(579) 00:12:04.795 fused_ordering(580) 00:12:04.795 fused_ordering(581) 00:12:04.795 fused_ordering(582) 00:12:04.795 fused_ordering(583) 00:12:04.795 fused_ordering(584) 00:12:04.795 fused_ordering(585) 00:12:04.795 fused_ordering(586) 00:12:04.795 fused_ordering(587) 00:12:04.795 fused_ordering(588) 00:12:04.795 fused_ordering(589) 00:12:04.795 fused_ordering(590) 00:12:04.795 fused_ordering(591) 00:12:04.795 fused_ordering(592) 00:12:04.795 fused_ordering(593) 00:12:04.795 fused_ordering(594) 00:12:04.795 fused_ordering(595) 00:12:04.795 fused_ordering(596) 00:12:04.795 fused_ordering(597) 00:12:04.795 fused_ordering(598) 00:12:04.795 fused_ordering(599) 00:12:04.795 fused_ordering(600) 00:12:04.795 fused_ordering(601) 00:12:04.795 fused_ordering(602) 00:12:04.795 fused_ordering(603) 00:12:04.795 fused_ordering(604) 00:12:04.795 fused_ordering(605) 00:12:04.795 fused_ordering(606) 00:12:04.795 fused_ordering(607) 00:12:04.795 fused_ordering(608) 00:12:04.795 fused_ordering(609) 00:12:04.795 fused_ordering(610) 00:12:04.795 fused_ordering(611) 00:12:04.795 fused_ordering(612) 00:12:04.795 fused_ordering(613) 00:12:04.795 fused_ordering(614) 00:12:04.795 fused_ordering(615) 00:12:04.795 fused_ordering(616) 00:12:04.795 fused_ordering(617) 00:12:04.795 fused_ordering(618) 00:12:04.795 fused_ordering(619) 00:12:04.795 fused_ordering(620) 00:12:04.795 fused_ordering(621) 00:12:04.795 fused_ordering(622) 00:12:04.795 fused_ordering(623) 00:12:04.795 fused_ordering(624) 00:12:04.795 fused_ordering(625) 00:12:04.795 fused_ordering(626) 00:12:04.795 fused_ordering(627) 00:12:04.795 fused_ordering(628) 00:12:04.795 fused_ordering(629) 00:12:04.795 fused_ordering(630) 00:12:04.795 fused_ordering(631) 00:12:04.795 fused_ordering(632) 00:12:04.795 fused_ordering(633) 00:12:04.795 
fused_ordering(634) 00:12:04.795 fused_ordering(635) 00:12:04.795 fused_ordering(636) 00:12:04.795 fused_ordering(637) 00:12:04.795 fused_ordering(638) 00:12:04.795 fused_ordering(639) 00:12:04.795 fused_ordering(640) 00:12:04.795 fused_ordering(641) 00:12:04.795 fused_ordering(642) 00:12:04.795 fused_ordering(643) 00:12:04.795 fused_ordering(644) 00:12:04.795 fused_ordering(645) 00:12:04.795 fused_ordering(646) 00:12:04.795 fused_ordering(647) 00:12:04.795 fused_ordering(648) 00:12:04.795 fused_ordering(649) 00:12:04.795 fused_ordering(650) 00:12:04.795 fused_ordering(651) 00:12:04.795 fused_ordering(652) 00:12:04.795 fused_ordering(653) 00:12:04.795 fused_ordering(654) 00:12:04.795 fused_ordering(655) 00:12:04.795 fused_ordering(656) 00:12:04.795 fused_ordering(657) 00:12:04.795 fused_ordering(658) 00:12:04.795 fused_ordering(659) 00:12:04.795 fused_ordering(660) 00:12:04.795 fused_ordering(661) 00:12:04.795 fused_ordering(662) 00:12:04.795 fused_ordering(663) 00:12:04.795 fused_ordering(664) 00:12:04.795 fused_ordering(665) 00:12:04.795 fused_ordering(666) 00:12:04.795 fused_ordering(667) 00:12:04.795 fused_ordering(668) 00:12:04.795 fused_ordering(669) 00:12:04.795 fused_ordering(670) 00:12:04.795 fused_ordering(671) 00:12:04.795 fused_ordering(672) 00:12:04.795 fused_ordering(673) 00:12:04.795 fused_ordering(674) 00:12:04.795 fused_ordering(675) 00:12:04.795 fused_ordering(676) 00:12:04.795 fused_ordering(677) 00:12:04.795 fused_ordering(678) 00:12:04.795 fused_ordering(679) 00:12:04.795 fused_ordering(680) 00:12:04.795 fused_ordering(681) 00:12:04.795 fused_ordering(682) 00:12:04.795 fused_ordering(683) 00:12:04.795 fused_ordering(684) 00:12:04.795 fused_ordering(685) 00:12:04.795 fused_ordering(686) 00:12:04.795 fused_ordering(687) 00:12:04.795 fused_ordering(688) 00:12:04.795 fused_ordering(689) 00:12:04.795 fused_ordering(690) 00:12:04.795 fused_ordering(691) 00:12:04.795 fused_ordering(692) 00:12:04.795 fused_ordering(693) 00:12:04.795 fused_ordering(694) 00:12:04.795 fused_ordering(695) 00:12:04.795 fused_ordering(696) 00:12:04.795 fused_ordering(697) 00:12:04.795 fused_ordering(698) 00:12:04.795 fused_ordering(699) 00:12:04.795 fused_ordering(700) 00:12:04.795 fused_ordering(701) 00:12:04.795 fused_ordering(702) 00:12:04.795 fused_ordering(703) 00:12:04.795 fused_ordering(704) 00:12:04.795 fused_ordering(705) 00:12:04.795 fused_ordering(706) 00:12:04.795 fused_ordering(707) 00:12:04.795 fused_ordering(708) 00:12:04.795 fused_ordering(709) 00:12:04.795 fused_ordering(710) 00:12:04.795 fused_ordering(711) 00:12:04.795 fused_ordering(712) 00:12:04.795 fused_ordering(713) 00:12:04.795 fused_ordering(714) 00:12:04.795 fused_ordering(715) 00:12:04.795 fused_ordering(716) 00:12:04.795 fused_ordering(717) 00:12:04.795 fused_ordering(718) 00:12:04.795 fused_ordering(719) 00:12:04.795 fused_ordering(720) 00:12:04.795 fused_ordering(721) 00:12:04.795 fused_ordering(722) 00:12:04.795 fused_ordering(723) 00:12:04.795 fused_ordering(724) 00:12:04.795 fused_ordering(725) 00:12:04.795 fused_ordering(726) 00:12:04.795 fused_ordering(727) 00:12:04.795 fused_ordering(728) 00:12:04.795 fused_ordering(729) 00:12:04.795 fused_ordering(730) 00:12:04.795 fused_ordering(731) 00:12:04.795 fused_ordering(732) 00:12:04.795 fused_ordering(733) 00:12:04.795 fused_ordering(734) 00:12:04.795 fused_ordering(735) 00:12:04.795 fused_ordering(736) 00:12:04.795 fused_ordering(737) 00:12:04.795 fused_ordering(738) 00:12:04.795 fused_ordering(739) 00:12:04.795 fused_ordering(740) 00:12:04.795 fused_ordering(741) 
00:12:04.795 fused_ordering(742) 00:12:04.795 fused_ordering(743) 00:12:04.795 fused_ordering(744) 00:12:04.795 fused_ordering(745) 00:12:04.795 fused_ordering(746) 00:12:04.795 fused_ordering(747) 00:12:04.795 fused_ordering(748) 00:12:04.795 fused_ordering(749) 00:12:04.795 fused_ordering(750) 00:12:04.795 fused_ordering(751) 00:12:04.795 fused_ordering(752) 00:12:04.795 fused_ordering(753) 00:12:04.795 fused_ordering(754) 00:12:04.795 fused_ordering(755) 00:12:04.795 fused_ordering(756) 00:12:04.795 fused_ordering(757) 00:12:04.795 fused_ordering(758) 00:12:04.795 fused_ordering(759) 00:12:04.795 fused_ordering(760) 00:12:04.795 fused_ordering(761) 00:12:04.795 fused_ordering(762) 00:12:04.795 fused_ordering(763) 00:12:04.795 fused_ordering(764) 00:12:04.795 fused_ordering(765) 00:12:04.795 fused_ordering(766) 00:12:04.795 fused_ordering(767) 00:12:04.795 fused_ordering(768) 00:12:04.795 fused_ordering(769) 00:12:04.795 fused_ordering(770) 00:12:04.795 fused_ordering(771) 00:12:04.795 fused_ordering(772) 00:12:04.795 fused_ordering(773) 00:12:04.795 fused_ordering(774) 00:12:04.795 fused_ordering(775) 00:12:04.795 fused_ordering(776) 00:12:04.795 fused_ordering(777) 00:12:04.795 fused_ordering(778) 00:12:04.795 fused_ordering(779) 00:12:04.795 fused_ordering(780) 00:12:04.795 fused_ordering(781) 00:12:04.795 fused_ordering(782) 00:12:04.795 fused_ordering(783) 00:12:04.795 fused_ordering(784) 00:12:04.795 fused_ordering(785) 00:12:04.796 fused_ordering(786) 00:12:04.796 fused_ordering(787) 00:12:04.796 fused_ordering(788) 00:12:04.796 fused_ordering(789) 00:12:04.796 fused_ordering(790) 00:12:04.796 fused_ordering(791) 00:12:04.796 fused_ordering(792) 00:12:04.796 fused_ordering(793) 00:12:04.796 fused_ordering(794) 00:12:04.796 fused_ordering(795) 00:12:04.796 fused_ordering(796) 00:12:04.796 fused_ordering(797) 00:12:04.796 fused_ordering(798) 00:12:04.796 fused_ordering(799) 00:12:04.796 fused_ordering(800) 00:12:04.796 fused_ordering(801) 00:12:04.796 fused_ordering(802) 00:12:04.796 fused_ordering(803) 00:12:04.796 fused_ordering(804) 00:12:04.796 fused_ordering(805) 00:12:04.796 fused_ordering(806) 00:12:04.796 fused_ordering(807) 00:12:04.796 fused_ordering(808) 00:12:04.796 fused_ordering(809) 00:12:04.796 fused_ordering(810) 00:12:04.796 fused_ordering(811) 00:12:04.796 fused_ordering(812) 00:12:04.796 fused_ordering(813) 00:12:04.796 fused_ordering(814) 00:12:04.796 fused_ordering(815) 00:12:04.796 fused_ordering(816) 00:12:04.796 fused_ordering(817) 00:12:04.796 fused_ordering(818) 00:12:04.796 fused_ordering(819) 00:12:04.796 fused_ordering(820) 00:12:05.055 fused_ordering(821) 00:12:05.055 fused_ordering(822) 00:12:05.055 fused_ordering(823) 00:12:05.055 fused_ordering(824) 00:12:05.055 fused_ordering(825) 00:12:05.055 fused_ordering(826) 00:12:05.055 fused_ordering(827) 00:12:05.055 fused_ordering(828) 00:12:05.055 fused_ordering(829) 00:12:05.055 fused_ordering(830) 00:12:05.055 fused_ordering(831) 00:12:05.055 fused_ordering(832) 00:12:05.055 fused_ordering(833) 00:12:05.055 fused_ordering(834) 00:12:05.055 fused_ordering(835) 00:12:05.055 fused_ordering(836) 00:12:05.055 fused_ordering(837) 00:12:05.055 fused_ordering(838) 00:12:05.055 fused_ordering(839) 00:12:05.055 fused_ordering(840) 00:12:05.055 fused_ordering(841) 00:12:05.055 fused_ordering(842) 00:12:05.055 fused_ordering(843) 00:12:05.055 fused_ordering(844) 00:12:05.055 fused_ordering(845) 00:12:05.055 fused_ordering(846) 00:12:05.055 fused_ordering(847) 00:12:05.055 fused_ordering(848) 00:12:05.055 
fused_ordering(849) 00:12:05.055 fused_ordering(850) 00:12:05.055 fused_ordering(851) 00:12:05.055 fused_ordering(852) 00:12:05.055 fused_ordering(853) 00:12:05.055 fused_ordering(854) 00:12:05.055 fused_ordering(855) 00:12:05.055 fused_ordering(856) 00:12:05.055 fused_ordering(857) 00:12:05.055 fused_ordering(858) 00:12:05.055 fused_ordering(859) 00:12:05.055 fused_ordering(860) 00:12:05.055 fused_ordering(861) 00:12:05.055 fused_ordering(862) 00:12:05.055 fused_ordering(863) 00:12:05.055 fused_ordering(864) 00:12:05.055 fused_ordering(865) 00:12:05.055 fused_ordering(866) 00:12:05.055 fused_ordering(867) 00:12:05.055 fused_ordering(868) 00:12:05.055 fused_ordering(869) 00:12:05.055 fused_ordering(870) 00:12:05.055 fused_ordering(871) 00:12:05.055 fused_ordering(872) 00:12:05.055 fused_ordering(873) 00:12:05.055 fused_ordering(874) 00:12:05.055 fused_ordering(875) 00:12:05.055 fused_ordering(876) 00:12:05.055 fused_ordering(877) 00:12:05.055 fused_ordering(878) 00:12:05.055 fused_ordering(879) 00:12:05.055 fused_ordering(880) 00:12:05.055 fused_ordering(881) 00:12:05.055 fused_ordering(882) 00:12:05.055 fused_ordering(883) 00:12:05.055 fused_ordering(884) 00:12:05.055 fused_ordering(885) 00:12:05.055 fused_ordering(886) 00:12:05.055 fused_ordering(887) 00:12:05.055 fused_ordering(888) 00:12:05.055 fused_ordering(889) 00:12:05.055 fused_ordering(890) 00:12:05.055 fused_ordering(891) 00:12:05.055 fused_ordering(892) 00:12:05.055 fused_ordering(893) 00:12:05.055 fused_ordering(894) 00:12:05.055 fused_ordering(895) 00:12:05.055 fused_ordering(896) 00:12:05.055 fused_ordering(897) 00:12:05.055 fused_ordering(898) 00:12:05.055 fused_ordering(899) 00:12:05.055 fused_ordering(900) 00:12:05.055 fused_ordering(901) 00:12:05.055 fused_ordering(902) 00:12:05.055 fused_ordering(903) 00:12:05.055 fused_ordering(904) 00:12:05.055 fused_ordering(905) 00:12:05.055 fused_ordering(906) 00:12:05.055 fused_ordering(907) 00:12:05.055 fused_ordering(908) 00:12:05.055 fused_ordering(909) 00:12:05.055 fused_ordering(910) 00:12:05.055 fused_ordering(911) 00:12:05.055 fused_ordering(912) 00:12:05.055 fused_ordering(913) 00:12:05.055 fused_ordering(914) 00:12:05.055 fused_ordering(915) 00:12:05.055 fused_ordering(916) 00:12:05.055 fused_ordering(917) 00:12:05.055 fused_ordering(918) 00:12:05.055 fused_ordering(919) 00:12:05.055 fused_ordering(920) 00:12:05.055 fused_ordering(921) 00:12:05.055 fused_ordering(922) 00:12:05.055 fused_ordering(923) 00:12:05.055 fused_ordering(924) 00:12:05.055 fused_ordering(925) 00:12:05.055 fused_ordering(926) 00:12:05.055 fused_ordering(927) 00:12:05.055 fused_ordering(928) 00:12:05.055 fused_ordering(929) 00:12:05.055 fused_ordering(930) 00:12:05.055 fused_ordering(931) 00:12:05.055 fused_ordering(932) 00:12:05.055 fused_ordering(933) 00:12:05.055 fused_ordering(934) 00:12:05.055 fused_ordering(935) 00:12:05.055 fused_ordering(936) 00:12:05.055 fused_ordering(937) 00:12:05.055 fused_ordering(938) 00:12:05.055 fused_ordering(939) 00:12:05.055 fused_ordering(940) 00:12:05.055 fused_ordering(941) 00:12:05.055 fused_ordering(942) 00:12:05.055 fused_ordering(943) 00:12:05.055 fused_ordering(944) 00:12:05.055 fused_ordering(945) 00:12:05.055 fused_ordering(946) 00:12:05.055 fused_ordering(947) 00:12:05.055 fused_ordering(948) 00:12:05.055 fused_ordering(949) 00:12:05.055 fused_ordering(950) 00:12:05.055 fused_ordering(951) 00:12:05.055 fused_ordering(952) 00:12:05.055 fused_ordering(953) 00:12:05.055 fused_ordering(954) 00:12:05.055 fused_ordering(955) 00:12:05.055 fused_ordering(956) 
00:12:05.055 fused_ordering(957) 00:12:05.055 fused_ordering(958) 00:12:05.055 fused_ordering(959) 00:12:05.055 fused_ordering(960) 00:12:05.055 fused_ordering(961) 00:12:05.055 fused_ordering(962) 00:12:05.055 fused_ordering(963) 00:12:05.055 fused_ordering(964) 00:12:05.055 fused_ordering(965) 00:12:05.055 fused_ordering(966) 00:12:05.055 fused_ordering(967) 00:12:05.055 fused_ordering(968) 00:12:05.055 fused_ordering(969) 00:12:05.055 fused_ordering(970) 00:12:05.055 fused_ordering(971) 00:12:05.055 fused_ordering(972) 00:12:05.055 fused_ordering(973) 00:12:05.055 fused_ordering(974) 00:12:05.055 fused_ordering(975) 00:12:05.055 fused_ordering(976) 00:12:05.055 fused_ordering(977) 00:12:05.055 fused_ordering(978) 00:12:05.055 fused_ordering(979) 00:12:05.055 fused_ordering(980) 00:12:05.055 fused_ordering(981) 00:12:05.055 fused_ordering(982) 00:12:05.055 fused_ordering(983) 00:12:05.055 fused_ordering(984) 00:12:05.055 fused_ordering(985) 00:12:05.055 fused_ordering(986) 00:12:05.055 fused_ordering(987) 00:12:05.055 fused_ordering(988) 00:12:05.055 fused_ordering(989) 00:12:05.055 fused_ordering(990) 00:12:05.055 fused_ordering(991) 00:12:05.055 fused_ordering(992) 00:12:05.055 fused_ordering(993) 00:12:05.055 fused_ordering(994) 00:12:05.055 fused_ordering(995) 00:12:05.055 fused_ordering(996) 00:12:05.055 fused_ordering(997) 00:12:05.055 fused_ordering(998) 00:12:05.055 fused_ordering(999) 00:12:05.055 fused_ordering(1000) 00:12:05.055 fused_ordering(1001) 00:12:05.055 fused_ordering(1002) 00:12:05.055 fused_ordering(1003) 00:12:05.055 fused_ordering(1004) 00:12:05.055 fused_ordering(1005) 00:12:05.055 fused_ordering(1006) 00:12:05.055 fused_ordering(1007) 00:12:05.055 fused_ordering(1008) 00:12:05.055 fused_ordering(1009) 00:12:05.055 fused_ordering(1010) 00:12:05.055 fused_ordering(1011) 00:12:05.055 fused_ordering(1012) 00:12:05.055 fused_ordering(1013) 00:12:05.055 fused_ordering(1014) 00:12:05.055 fused_ordering(1015) 00:12:05.055 fused_ordering(1016) 00:12:05.055 fused_ordering(1017) 00:12:05.055 fused_ordering(1018) 00:12:05.055 fused_ordering(1019) 00:12:05.056 fused_ordering(1020) 00:12:05.056 fused_ordering(1021) 00:12:05.056 fused_ordering(1022) 00:12:05.056 fused_ordering(1023) 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:05.056 rmmod nvme_rdma 00:12:05.056 rmmod nvme_fabrics 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1568723 ']' 
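The nvmftestfini teardown above handles module unload inside a retry loop: errexit is disabled, nvme-rdma and nvme-fabrics are removed with modprobe -r (up to 20 attempts), and -e is restored afterwards. A small sketch of that pattern follows; it is simplified from what nvmf/common.sh appears to do in this log, needs root, and the sleep between retries is an assumption not shown in the output above.

#!/usr/bin/env bash
# Sketch of the module-unload loop in nvmftestfini above.
sync
set +e                       # removal can fail while the modules still have users
for i in {1..20}; do
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1                  # assumption: pause between retries (not visible in the log)
done
set -e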
00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1568723 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1568723 ']' 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1568723 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1568723 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1568723' 00:12:05.056 killing process with pid 1568723 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1568723 00:12:05.056 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1568723 00:12:05.315 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:05.315 18:04:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:05.315 00:12:05.315 real 0m10.029s 00:12:05.315 user 0m4.862s 00:12:05.315 sys 0m6.479s 00:12:05.315 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.315 18:04:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:05.315 ************************************ 00:12:05.315 END TEST nvmf_fused_ordering 00:12:05.315 ************************************ 00:12:05.315 18:04:05 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:05.315 18:04:05 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:05.315 18:04:05 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:05.315 18:04:05 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.315 18:04:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:05.574 ************************************ 00:12:05.574 START TEST nvmf_delete_subsystem 00:12:05.574 ************************************ 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:05.574 * Looking for test storage... 
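For reference, the killprocess 1568723 call that closed out the fused_ordering run just above follows the framework's usual shape: confirm the pid is set, look up the command name with ps so a sudo wrapper is never signalled by mistake, then kill and wait. The function below is a rough, hypothetical equivalent (killprocess_sketch is an invented name), not the autotest_common.sh implementation.

#!/usr/bin/env bash
# Hypothetical stand-in for the killprocess helper seen above; pid taken from the log.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0                 # already gone, nothing to do
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [ "$process_name" = "sudo" ]; then
        return 1                                            # refuse to signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

killprocess_sketch 1568723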
00:12:05.574 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.574 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:05.575 18:04:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.737 18:04:13 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:13.737 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:13.737 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:13.737 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.737 18:04:13 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:13.737 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.737 18:04:13 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.737 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:13.738 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.738 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:13.738 altname enp217s0f0np0 00:12:13.738 altname ens818f0np0 00:12:13.738 inet 192.168.100.8/24 scope global mlx_0_0 00:12:13.738 valid_lft forever preferred_lft forever 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:13.738 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:13.738 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:13.738 altname enp217s0f1np1 00:12:13.738 altname ens818f1np1 00:12:13.738 inet 192.168.100.9/24 scope global mlx_0_1 00:12:13.738 valid_lft forever preferred_lft forever 00:12:13.738 18:04:13 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:13.738 18:04:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:13.738 192.168.100.9' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:13.738 192.168.100.9' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:13.738 192.168.100.9' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1572993 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1572993 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1572993 ']' 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.738 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.001 [2024-07-15 18:04:14.140460] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
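The address discovery traced above boils down to a couple of iproute2/awk one-liners. A minimal sketch of that step, paraphrasing the get_ip_address/get_available_rdma_ips helpers sourced from test/nvmf/common.sh (simplified here; the in-tree helpers carry extra iteration and error handling, and the variable names below mirror the trace rather than being authoritative):

    #!/usr/bin/env bash
    # Pull the IPv4 address off each mlx interface, exactly as the trace does
    # with `ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1`.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'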
00:12:14.001 [2024-07-15 18:04:14.140513] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.001 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.001 [2024-07-15 18:04:14.222773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:14.001 [2024-07-15 18:04:14.296159] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.001 [2024-07-15 18:04:14.296198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.001 [2024-07-15 18:04:14.296208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.001 [2024-07-15 18:04:14.296217] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.001 [2024-07-15 18:04:14.296240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.001 [2024-07-15 18:04:14.296286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.001 [2024-07-15 18:04:14.296289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.568 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.568 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:14.568 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:14.568 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.568 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.827 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:14.827 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.827 18:04:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 [2024-07-15 18:04:15.008874] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e65640/0x1e69b30) succeed. 00:12:14.827 [2024-07-15 18:04:15.017849] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e66af0/0x1eab1c0) succeed. 
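With the addresses in hand, nvmfappstart launches the target application and the first RPC creates the RDMA transport, which is what produces the two "Create IB device ... succeed" notices above. A rough equivalent of that step, assuming the usual scripts/rpc.py wrapper behind rpc_cmd and a crude stand-in for waitforlisten (the in-tree helper retries with a timeout and a configurable RPC socket path):

    # Start the NVMe-oF target pinned to cores 0-1 and wait for its RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Create the RDMA transport with the same options as in the log:
    # 1024 shared buffers and 8 KiB of in-capsule data.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192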
00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 [2024-07-15 18:04:15.106314] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 NULL1 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 Delay0 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1573206 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:14.827 18:04:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:14.827 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.827 [2024-07-15 18:04:15.220279] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
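The subsystem under test is then assembled entirely over RPC before the perf load starts. The rpc_cmd calls in the trace correspond roughly to the sequence below (rpc_cmd is a thin wrapper around scripts/rpc.py; the very large delay-bdev latencies are what keep I/O in flight long enough for the later delete to race with it):

    # Subsystem with a delay bdev namespace, mirroring delete_subsystem.sh:
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Kick off the perf load in the background; its pid is what the test later polls.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!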
00:12:17.362 18:04:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.362 18:04:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.362 18:04:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.929 NVMe io qpair process completion error 00:12:17.929 NVMe io qpair process completion error 00:12:17.929 NVMe io qpair process completion error 00:12:17.929 NVMe io qpair process completion error 00:12:17.929 NVMe io qpair process completion error 00:12:17.929 NVMe io qpair process completion error 00:12:17.930 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.930 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:17.930 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1573206 00:12:17.930 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:18.497 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:18.497 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1573206 00:12:18.497 18:04:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O 
failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error 
(sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read 
completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Write completed with error (sct=0, sc=8) 00:12:19.066 starting I/O failed: -6 00:12:19.066 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 starting I/O failed: -6 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error 
(sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read 
completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Write completed with error (sct=0, sc=8) 00:12:19.067 Read completed with error (sct=0, sc=8) 00:12:19.067 Initializing NVMe Controllers 00:12:19.067 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:19.067 Controller IO queue size 128, less than required. 00:12:19.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:19.067 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:19.067 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:19.067 Initialization complete. Launching workers. 00:12:19.067 ======================================================== 00:12:19.067 Latency(us) 00:12:19.067 Device Information : IOPS MiB/s Average min max 00:12:19.067 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.30 0.04 1596596.61 1000150.08 2985398.21 00:12:19.067 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.30 0.04 1598126.10 1001595.54 2986590.86 00:12:19.067 ======================================================== 00:12:19.067 Total : 160.60 0.08 1597361.35 1000150.08 2986590.86 00:12:19.067 00:12:19.067 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:19.067 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1573206 00:12:19.067 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:19.067 [2024-07-15 18:04:19.317564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:19.067 [2024-07-15 18:04:19.317604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
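The burst of failed completions above is the point of the test: nvmf_delete_subsystem was issued while spdk_nvme_perf still had a full queue of I/O outstanding, so every in-flight request completes with an error and the perf qpairs fail, which is the expected outcome. The wait that follows in the script is essentially the polling loop sketched below (a simplified reading of the delay/kill -0 pattern visible in the trace; the in-tree script additionally treats an over-long wait as a test failure):

    # Delete the subsystem out from under the running perf job, then wait for
    # the perf process to notice the failed qpairs and exit on its own.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf did not exit in time" >&2; exit 1; }
        sleep 0.5
    done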
00:12:19.067 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1573206 00:12:19.635 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1573206) - No such process 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1573206 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1573206 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1573206 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.635 [2024-07-15 18:04:19.840863] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1574013 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:19.635 18:04:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:19.635 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.635 [2024-07-15 18:04:19.923329] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:20.202 18:04:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:20.202 18:04:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:20.202 18:04:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:20.768 18:04:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:20.768 18:04:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:20.768 18:04:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:21.026 18:04:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:21.026 18:04:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:21.027 18:04:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:21.593 18:04:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:21.593 18:04:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:21.593 18:04:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:22.161 18:04:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:22.161 18:04:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:22.161 18:04:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:22.728 18:04:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:22.728 18:04:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:22.728 18:04:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:23.295 18:04:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:23.295 18:04:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:23.295 18:04:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:23.554 18:04:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:23.554 18:04:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:23.554 18:04:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.140 18:04:24 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.140 18:04:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:24.140 18:04:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.706 18:04:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.706 18:04:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:24.706 18:04:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.273 18:04:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.273 18:04:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:25.273 18:04:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.532 18:04:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.532 18:04:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:25.532 18:04:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.099 18:04:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.099 18:04:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:26.099 18:04:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.671 18:04:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.671 18:04:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:26.671 18:04:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.671 Initializing NVMe Controllers 00:12:26.671 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:26.671 Controller IO queue size 128, less than required. 00:12:26.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:26.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:26.671 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:26.671 Initialization complete. Launching workers. 
00:12:26.671 ======================================================== 00:12:26.671 Latency(us) 00:12:26.671 Device Information : IOPS MiB/s Average min max 00:12:26.671 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001334.33 1000057.38 1003867.88 00:12:26.671 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002612.66 1000063.59 1006043.37 00:12:26.671 ======================================================== 00:12:26.671 Total : 256.00 0.12 1001973.50 1000057.38 1006043.37 00:12:26.671 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1574013 00:12:27.303 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1574013) - No such process 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1574013 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:27.303 rmmod nvme_rdma 00:12:27.303 rmmod nvme_fabrics 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:27.303 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1572993 ']' 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1572993 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1572993 ']' 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1572993 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1572993 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1572993' 00:12:27.304 killing process with pid 1572993 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 
1572993 00:12:27.304 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1572993 00:12:27.563 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.563 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:27.563 00:12:27.563 real 0m22.041s 00:12:27.563 user 0m50.484s 00:12:27.563 sys 0m7.519s 00:12:27.563 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:27.563 18:04:27 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.563 ************************************ 00:12:27.563 END TEST nvmf_delete_subsystem 00:12:27.563 ************************************ 00:12:27.563 18:04:27 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:27.563 18:04:27 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:12:27.563 18:04:27 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:27.563 18:04:27 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:27.563 18:04:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:27.563 ************************************ 00:12:27.563 START TEST nvmf_ns_masking 00:12:27.563 ************************************ 00:12:27.563 18:04:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:12:27.563 * Looking for test storage... 00:12:27.563 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:27.563 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.563 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:27.563 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.563 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.563 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
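The nvmf/common.sh trace above is the test environment being assembled: listener port 4420, the 192.168.100.x address range, and a per-run host identity generated with nvme gen-hostnqn. A rough sketch of that identity setup; the parameter expansion used to strip the UUID out of the generated NQN is an assumption, not common.sh verbatim:

    NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # bare UUID portion, passed later as --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'                   # widened to 'nvme connect -i 15' once mlx5 NICs are detected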
00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2f958e11-ae68-4b7f-a909-330eff54aee7 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:27.823 18:04:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=277ed489-deed-4720-b6f1-31ec80f72c22 00:12:27.823 18:04:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:27.823 18:04:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:27.823 18:04:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:27.823 18:04:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:27.823 18:04:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3c5bcc89-5b8f-41d7-b135-76ce77f8e74a 00:12:27.823 18:04:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:27.824 18:04:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:35.944 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:35.945 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:35.945 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:35.945 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:35.945 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- 
# '[' Linux '!=' Linux ']' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:35.945 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:35.945 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:35.945 altname enp217s0f0np0 00:12:35.945 altname ens818f0np0 00:12:35.945 inet 192.168.100.8/24 scope global mlx_0_0 00:12:35.945 valid_lft forever preferred_lft forever 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:35.945 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:35.945 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:35.945 altname enp217s0f1np1 00:12:35.945 altname ens818f1np1 00:12:35.945 inet 192.168.100.9/24 scope global mlx_0_1 00:12:35.945 valid_lft forever preferred_lft forever 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:35.945 18:04:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:35.945 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:35.946 192.168.100.9' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:35.946 192.168.100.9' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:35.946 192.168.100.9' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1579386 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1579386 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1579386 
']' 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:35.946 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:35.946 [2024-07-15 18:04:36.135147] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:12:35.946 [2024-07-15 18:04:36.135202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.946 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.946 [2024-07-15 18:04:36.218671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.946 [2024-07-15 18:04:36.290681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.946 [2024-07-15 18:04:36.290721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.946 [2024-07-15 18:04:36.290731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.946 [2024-07-15 18:04:36.290739] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.946 [2024-07-15 18:04:36.290745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.946 [2024-07-15 18:04:36.290766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.883 18:04:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:36.883 [2024-07-15 18:04:37.135586] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f9eb20/0x1fa3010) succeed. 00:12:36.883 [2024-07-15 18:04:37.144485] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fa0020/0x1fe46a0) succeed. 
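With the RDMA transport in place, the setup and masking checks that follow are driven through rpc.py and nvme-cli. Collapsed down (rpc.py stands for the full scripts/rpc.py path printed in the log), the target-side sequence is roughly:

    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # later the namespace is re-added without auto-visibility and exposed per host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

On the initiator side, the ns_is_visible checks traced below amount to listing namespaces and comparing the reported NGUID against zeroes; a sketch of the helper rather than ns_masking.sh verbatim:

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"        # prints e.g. "[ 0]:0x1" when the NSID is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]   # a masked namespace reports an all-zero NGUID
    }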
00:12:36.883 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:36.883 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:36.883 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:37.142 Malloc1 00:12:37.142 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:37.142 Malloc2 00:12:37.142 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.400 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:37.658 18:04:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:37.659 [2024-07-15 18:04:37.988496] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:37.659 18:04:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:37.659 18:04:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3c5bcc89-5b8f-41d7-b135-76ce77f8e74a -a 192.168.100.8 -s 4420 -i 4 00:12:37.917 18:04:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.917 18:04:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.917 18:04:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.917 18:04:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:37.917 18:04:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:40.450 [ 0]:0x1 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c8481d22eef4fbc81568ef098126bc5 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c8481d22eef4fbc81568ef098126bc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.450 [ 0]:0x1 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c8481d22eef4fbc81568ef098126bc5 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c8481d22eef4fbc81568ef098126bc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:40.450 [ 1]:0x2 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:40.450 18:04:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.709 18:04:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.968 18:04:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:41.226 18:04:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:41.226 18:04:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3c5bcc89-5b8f-41d7-b135-76ce77f8e74a -a 192.168.100.8 -s 4420 -i 4 00:12:41.485 18:04:41 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:41.485 18:04:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.485 18:04:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.485 18:04:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:41.485 18:04:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:41.485 18:04:41 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.390 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.390 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.390 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.390 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.390 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.391 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:43.391 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:43.391 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.649 18:04:43 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.649 [ 0]:0x2 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.649 18:04:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.908 [ 0]:0x1 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c8481d22eef4fbc81568ef098126bc5 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c8481d22eef4fbc81568ef098126bc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.908 [ 1]:0x2 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.908 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:44.167 18:04:44 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:44.167 [ 0]:0x2 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:44.167 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.426 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.686 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:44.686 18:04:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3c5bcc89-5b8f-41d7-b135-76ce77f8e74a -a 192.168.100.8 -s 4420 -i 4 00:12:44.946 18:04:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:44.946 18:04:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.946 18:04:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.946 18:04:45 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:44.946 18:04:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:44.946 18:04:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.851 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.851 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.851 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.112 [ 0]:0x1 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c8481d22eef4fbc81568ef098126bc5 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c8481d22eef4fbc81568ef098126bc5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.112 [ 1]:0x2 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.112 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:47.446 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.447 [ 0]:0x2 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.447 18:04:47 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:47.447 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:47.706 [2024-07-15 18:04:47.857532] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:47.706 request: 00:12:47.706 { 00:12:47.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.706 "nsid": 2, 00:12:47.706 "host": "nqn.2016-06.io.spdk:host1", 00:12:47.706 "method": "nvmf_ns_remove_host", 00:12:47.706 "req_id": 1 00:12:47.706 } 00:12:47.706 Got JSON-RPC error response 00:12:47.706 response: 00:12:47.706 { 00:12:47.706 "code": -32602, 00:12:47.706 "message": "Invalid parameters" 00:12:47.706 } 00:12:47.706 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:47.706 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.706 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:47.707 18:04:47 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.707 [ 0]:0x2 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=def8b6dac0254e1bb5bc8d405f4d1a7b 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ def8b6dac0254e1bb5bc8d405f4d1a7b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:47.707 18:04:47 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1581564 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1581564 /var/tmp/host.sock 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1581564 ']' 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:47.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.966 18:04:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.966 [2024-07-15 18:04:48.354480] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
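For readers following the trace, the ns_is_visible helper that target/ns_masking.sh keeps invoking above boils down to two nvme-cli calls plus an NGUID comparison. A minimal standalone sketch of the idea, not a verbatim copy of the helper; the hard-coded /dev/nvme0 matches the controller attached earlier in this test:

    ns_is_visible() {
        local nsid=$1
        # Visible namespaces show up in the active namespace list...
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # ...and report a non-zero NGUID; a masked one identifies as all zeroes.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1   # succeeds before nvmf_ns_remove_host, fails after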
00:12:47.966 [2024-07-15 18:04:48.354534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1581564 ] 00:12:48.224 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.224 [2024-07-15 18:04:48.440933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.224 [2024-07-15 18:04:48.510265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.792 18:04:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.792 18:04:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:48.792 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.051 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:49.309 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2f958e11-ae68-4b7f-a909-330eff54aee7 00:12:49.309 18:04:49 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:49.309 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2F958E11AE684B7FA909330EFF54AEE7 -i 00:12:49.309 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 277ed489-deed-4720-b6f1-31ec80f72c22 00:12:49.309 18:04:49 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:49.309 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 277ED489DEED4720B6F131EC80F72C22 -i 00:12:49.567 18:04:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.826 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:49.826 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:49.826 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:50.085 nvme0n1 00:12:50.085 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:50.085 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host2 -b nvme1 00:12:50.343 nvme1n2 00:12:50.343 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:50.343 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:50.343 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:50.343 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:50.343 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:50.601 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:50.601 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:50.601 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:50.601 18:04:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2f958e11-ae68-4b7f-a909-330eff54aee7 == \2\f\9\5\8\e\1\1\-\a\e\6\8\-\4\b\7\f\-\a\9\0\9\-\3\3\0\e\f\f\5\4\a\e\e\7 ]] 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 277ed489-deed-4720-b6f1-31ec80f72c22 == \2\7\7\e\d\4\8\9\-\d\e\e\d\-\4\7\2\0\-\b\6\f\1\-\3\1\e\c\8\0\f\7\2\c\2\2 ]] 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1581564 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1581564 ']' 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1581564 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1581564 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1581564' 00:12:50.860 killing process with pid 1581564 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1581564 00:12:50.860 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1581564 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@142 -- # 
nvmftestfini 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:51.428 rmmod nvme_rdma 00:12:51.428 rmmod nvme_fabrics 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1579386 ']' 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1579386 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1579386 ']' 00:12:51.428 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1579386 00:12:51.429 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:51.429 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.429 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1579386 00:12:51.688 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:51.688 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:51.688 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1579386' 00:12:51.688 killing process with pid 1579386 00:12:51.688 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1579386 00:12:51.688 18:04:51 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1579386 00:12:51.947 18:04:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.947 18:04:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:51.947 00:12:51.947 real 0m24.251s 00:12:51.947 user 0m25.661s 00:12:51.947 sys 0m8.469s 00:12:51.947 18:04:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.947 18:04:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.947 ************************************ 00:12:51.947 END TEST nvmf_ns_masking 00:12:51.947 ************************************ 00:12:51.947 18:04:52 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:51.947 18:04:52 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:51.947 18:04:52 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:51.947 18:04:52 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.947 18:04:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.947 18:04:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:51.947 ************************************ 00:12:51.947 START TEST nvmf_nvme_cli 
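Aside, before the nvme_cli run gets going: the per-host verification that closed the masking test above is compact enough to restate. The second spdk_tgt on /var/tmp/host.sock attaches one controller per host NQN, and the test then checks that each host sees only the bdev whose NGUID/UUID it was granted. Paths, addresses, NQNs and the UUID below are the ones traced above; the shell wrapper around them is illustrative:

    hostrpc='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock'
    # host1 was granted only namespace 1, so its controller exposes a single bdev...
    $hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    # ...whose UUID must match the NGUID assigned when the namespace was re-added.
    uuid=$($hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid')
    [[ $uuid == 2f958e11-ae68-4b7f-a909-330eff54aee7 ]]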
00:12:51.947 ************************************ 00:12:51.947 18:04:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:51.947 * Looking for test storage... 00:12:51.947 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:51.947 18:04:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.947 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.948 18:04:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.075 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:00.075 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:00.076 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:00.076 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:00.076 18:04:59 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:00.076 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:00.076 18:04:59 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
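The per-port address harvesting that the trace below performs for each mlx_0_* interface is essentially a one-liner. A condensed sketch; the helper name mirrors nvmf/common.sh and the interface names are the ones discovered above, but the direct variable assignments are a simplification of the RDMA_IP_LIST head/tail logic:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 on this node
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 on this node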
00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:00.076 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:00.076 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:00.076 altname enp217s0f0np0 00:13:00.076 altname ens818f0np0 00:13:00.076 inet 192.168.100.8/24 scope global mlx_0_0 00:13:00.076 valid_lft forever preferred_lft forever 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:00.076 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:00.076 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:00.076 altname enp217s0f1np1 00:13:00.076 altname ens818f1np1 00:13:00.076 inet 192.168.100.9/24 scope global mlx_0_1 00:13:00.076 valid_lft forever preferred_lft forever 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:00.076 192.168.100.9' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:00.076 192.168.100.9' 00:13:00.076 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:00.077 192.168.100.9' 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1586290 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1586290 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1586290 ']' 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:00.077 18:05:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:00.077 [2024-07-15 18:05:00.287574] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:13:00.077 [2024-07-15 18:05:00.287624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.077 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.077 [2024-07-15 18:05:00.372328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.077 [2024-07-15 18:05:00.449491] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.077 [2024-07-15 18:05:00.449533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.077 [2024-07-15 18:05:00.449542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.077 [2024-07-15 18:05:00.449551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.077 [2024-07-15 18:05:00.449561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
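Once the reactors below come up and waitforlisten returns, nvme_cli.sh provisions the target over the default /var/tmp/spdk.sock socket. Condensed from the rpc_cmd traces that follow (sizes, serial number, model string and NQN exactly as used there; the $rpc shorthand is illustrative):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420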
00:13:00.077 [2024-07-15 18:05:00.449602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.077 [2024-07-15 18:05:00.449621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.077 [2024-07-15 18:05:00.449931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.077 [2024-07-15 18:05:00.449933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 [2024-07-15 18:05:01.174343] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10e0f80/0x10e5470) succeed. 00:13:01.013 [2024-07-15 18:05:01.183501] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10e25c0/0x1126b00) succeed. 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 Malloc0 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 Malloc1 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 [2024-07-15 18:05:01.380594] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.013 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:13:01.272 00:13:01.272 Discovery Log Number of Records 2, Generation counter 2 00:13:01.272 =====Discovery Log Entry 0====== 00:13:01.272 trtype: rdma 00:13:01.272 adrfam: ipv4 00:13:01.272 subtype: current discovery subsystem 00:13:01.272 treq: not required 00:13:01.272 portid: 0 00:13:01.272 trsvcid: 4420 00:13:01.272 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:01.272 traddr: 192.168.100.8 00:13:01.272 eflags: explicit discovery connections, duplicate discovery information 00:13:01.272 rdma_prtype: not specified 00:13:01.272 rdma_qptype: connected 00:13:01.272 rdma_cms: rdma-cm 00:13:01.272 rdma_pkey: 0x0000 00:13:01.272 =====Discovery Log Entry 1====== 00:13:01.272 trtype: rdma 00:13:01.272 adrfam: ipv4 00:13:01.272 subtype: nvme subsystem 00:13:01.272 treq: not required 00:13:01.272 portid: 0 00:13:01.272 trsvcid: 4420 00:13:01.272 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:01.272 traddr: 192.168.100.8 00:13:01.272 eflags: none 00:13:01.272 rdma_prtype: not specified 00:13:01.272 rdma_qptype: connected 00:13:01.272 rdma_cms: rdma-cm 00:13:01.273 rdma_pkey: 0x0000 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:01.273 18:05:01 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:01.273 18:05:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:02.224 18:05:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:02.224 18:05:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.224 18:05:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.224 18:05:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:02.224 18:05:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:02.224 18:05:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:04.134 /dev/nvme0n1 ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:04.134 18:05:04 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:04.393 18:05:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:04.393 18:05:04 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:05.328 rmmod nvme_rdma 00:13:05.328 rmmod nvme_fabrics 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1586290 ']' 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1586290 00:13:05.328 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1586290 ']' 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1586290 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1586290 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1586290' 00:13:05.329 killing process with pid 1586290 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1586290 00:13:05.329 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1586290 00:13:05.588 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.588 18:05:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:05.588 00:13:05.588 real 0m13.768s 00:13:05.588 user 0m24.009s 00:13:05.588 sys 0m6.600s 00:13:05.588 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:05.588 18:05:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:05.588 ************************************ 00:13:05.588 END TEST nvmf_nvme_cli 00:13:05.588 ************************************ 00:13:05.847 18:05:06 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:05.847 18:05:06 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:05.847 18:05:06 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:05.847 18:05:06 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:05.847 18:05:06 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.847 18:05:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:05.847 ************************************ 00:13:05.847 START TEST nvmf_host_management 00:13:05.847 ************************************ 00:13:05.847 18:05:06 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:05.847 * Looking for test storage... 
00:13:05.847 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:05.847 18:05:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.847 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:05.847 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.847 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.847 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.848 18:05:06 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:14.037 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:14.037 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.037 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:14.038 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:14.038 
18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:14.038 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:14.038 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.038 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:14.038 altname enp217s0f0np0 00:13:14.038 altname ens818f0np0 00:13:14.038 inet 192.168.100.8/24 scope global mlx_0_0 00:13:14.038 valid_lft forever preferred_lft forever 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.038 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:14.297 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.297 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:14.297 altname enp217s0f1np1 00:13:14.297 altname ens818f1np1 00:13:14.297 inet 192.168.100.9/24 scope global mlx_0_1 00:13:14.297 valid_lft forever preferred_lft forever 
00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:14.297 192.168.100.9' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:14.297 192.168.100.9' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:14.297 192.168.100.9' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1591324 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1591324 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1591324 ']' 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.297 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.298 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.298 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.298 18:05:14 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:14.298 [2024-07-15 18:05:14.616236] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:13:14.298 [2024-07-15 18:05:14.616284] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.298 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.556 [2024-07-15 18:05:14.699159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.556 [2024-07-15 18:05:14.774955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.556 [2024-07-15 18:05:14.774998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.556 [2024-07-15 18:05:14.775015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.556 [2024-07-15 18:05:14.775023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.556 [2024-07-15 18:05:14.775030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.556 [2024-07-15 18:05:14.775117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.556 [2024-07-15 18:05:14.775221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.556 [2024-07-15 18:05:14.775728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.556 [2024-07-15 18:05:14.775729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.124 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.124 [2024-07-15 18:05:15.504692] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22020d0/0x22065c0) succeed. 00:13:15.124 [2024-07-15 18:05:15.514052] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2203710/0x2247c50) succeed. 
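The two create_ib_device notices above are logged while the target registers the RDMA transport requested at host_management.sh@18. Assuming rpc_cmd in this harness is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket of the nvmf_tgt started earlier (an assumption, not shown in the trace), a standalone equivalent of that call would be roughly:

    # Illustrative equivalent of the traced rpc_cmd call: register the RDMA
    # transport with 1024 shared buffers and an 8 KiB I/O unit size.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192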
00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.382 Malloc0 00:13:15.382 [2024-07-15 18:05:15.693376] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1591622 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1591622 /var/tmp/bdevperf.sock 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1591622 ']' 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
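Unpacked from the single traced line above, the perf job being launched is bdevperf with a JSON bdev config fed over a process-substitution FD (/dev/fd/63); the config itself, produced by gen_nvmf_target_json 0, is traced below. Written out long-hand, and only as an illustrative sketch, the launch is:

    # Sketch of the bdevperf launch traced above (host_management.sh@72-74):
    # 64-deep queue, 64 KiB I/Os, verify workload, 10 second run, attaching
    # to the target via the generated NVMe-oF/RDMA config.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10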
00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:15.382 { 00:13:15.382 "params": { 00:13:15.382 "name": "Nvme$subsystem", 00:13:15.382 "trtype": "$TEST_TRANSPORT", 00:13:15.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.382 "adrfam": "ipv4", 00:13:15.382 "trsvcid": "$NVMF_PORT", 00:13:15.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.382 "hdgst": ${hdgst:-false}, 00:13:15.382 "ddgst": ${ddgst:-false} 00:13:15.382 }, 00:13:15.382 "method": "bdev_nvme_attach_controller" 00:13:15.382 } 00:13:15.382 EOF 00:13:15.382 )") 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:15.382 18:05:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:15.382 "params": { 00:13:15.382 "name": "Nvme0", 00:13:15.382 "trtype": "rdma", 00:13:15.382 "traddr": "192.168.100.8", 00:13:15.382 "adrfam": "ipv4", 00:13:15.382 "trsvcid": "4420", 00:13:15.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:15.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:15.382 "hdgst": false, 00:13:15.382 "ddgst": false 00:13:15.382 }, 00:13:15.382 "method": "bdev_nvme_attach_controller" 00:13:15.382 }' 00:13:15.641 [2024-07-15 18:05:15.797487] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:13:15.641 [2024-07-15 18:05:15.797539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591622 ] 00:13:15.641 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.641 [2024-07-15 18:05:15.882579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.641 [2024-07-15 18:05:15.953239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.900 Running I/O for 10 seconds... 
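The "Running I/O for 10 seconds..." line above is bdevperf starting its verify job; the harness then polls it over the bdevperf RPC socket until the target bdev has seen some traffic. A simplified sketch of the waitforio loop traced below follows; the retry count of 10 and the 100-read threshold match the trace, while the per-iteration sleep and the function's parameters are assumptions added for illustration.

    # Simplified sketch of waitforio: poll bdev_get_iostat on the bdevperf
    # RPC socket until the bdev reports at least 100 completed reads.
    waitforio() {
        local sock=$1 bdev=$2 i
        for (( i = 10; i != 0; i-- )); do
            local ops
            ops=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
                  | jq -r '.bdevs[0].num_read_ops')
            if [ "$ops" -ge 100 ]; then
                return 0
            fi
            sleep 1   # pacing between polls is assumed, not shown in the trace
        done
        return 1
    }

    # Usage as in this test: waitforio /var/tmp/bdevperf.sock Nvme0n1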
00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1516 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1516 -ge 100 ']' 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.467 18:05:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:17.405 [2024-07-15 18:05:17.694605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:13:17.405 [2024-07-15 18:05:17.694639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:13:17.405 [2024-07-15 18:05:17.694667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:13:17.405 [2024-07-15 18:05:17.694689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:13:17.405 [2024-07-15 18:05:17.694708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:13:17.405 [2024-07-15 18:05:17.694728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:13:17.405 [2024-07-15 18:05:17.694749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:13:17.405 [2024-07-15 18:05:17.694773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.405 [2024-07-15 18:05:17.694784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:13:17.405 [2024-07-15 18:05:17.694793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200018e8fc80 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:13:17.406 [2024-07-15 18:05:17.694972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.694983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 
key:0x182100 00:13:17.406 [2024-07-15 18:05:17.694993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:13:17.406 [2024-07-15 18:05:17.695017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:13:17.406 [2024-07-15 18:05:17.695036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:13:17.406 [2024-07-15 18:05:17.695056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:13:17.406 [2024-07-15 18:05:17.695076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:13:17.406 [2024-07-15 18:05:17.695096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 
18:05:17.695188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dacd000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daac000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da8b000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da6a000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da49000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da28000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da07000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9e6000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9c5000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d9a4000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d983000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d962000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d941000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d920000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.406 [2024-07-15 18:05:17.695583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd1f000 len:0x10000 key:0x182400 00:13:17.406 [2024-07-15 18:05:17.695592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcfe000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcdd000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dcbc000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc9b000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc7a000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc59000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc38000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc17000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba30000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbf6000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbd5000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbb4000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db93000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db72000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.695902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db51000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 
00:13:17.407 [2024-07-15 18:05:17.695923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db30000 len:0x10000 key:0x182400 00:13:17.407 [2024-07-15 18:05:17.695932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:b9e6c000 sqhd:52b0 p:0 m:0 dnr:0 00:13:17.407 [2024-07-15 18:05:17.697945] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:13:17.407 [2024-07-15 18:05:17.698841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:17.407 task offset: 87168 on job bdev=Nvme0n1 fails 00:13:17.407 00:13:17.407 Latency(us) 00:13:17.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:17.407 Job: Nvme0n1 ended in about 1.56 seconds with error 00:13:17.407 Verification LBA range: start 0x0 length 0x400 00:13:17.407 Nvme0n1 : 1.56 1066.54 66.66 41.02 0.00 57339.16 2123.37 1013343.85 00:13:17.407 =================================================================================================================== 00:13:17.407 Total : 1066.54 66.66 41.02 0.00 57339.16 2123.37 1013343.85 00:13:17.407 [2024-07-15 18:05:17.700391] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1591622 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:17.407 { 00:13:17.407 "params": { 00:13:17.407 "name": "Nvme$subsystem", 00:13:17.407 "trtype": "$TEST_TRANSPORT", 00:13:17.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.407 "adrfam": "ipv4", 00:13:17.407 "trsvcid": "$NVMF_PORT", 00:13:17.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.407 "hdgst": ${hdgst:-false}, 00:13:17.407 "ddgst": ${ddgst:-false} 00:13:17.407 }, 00:13:17.407 "method": "bdev_nvme_attach_controller" 00:13:17.407 } 00:13:17.407 EOF 00:13:17.407 )") 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
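The gen_nvmf_target_json helper above assembles the bdev configuration that the second bdevperf invocation reads through --json /dev/fd/62. A minimal standalone equivalent is sketched here; the attach-controller parameters are the ones printed in the xtrace output just below, while the outer "subsystems"/"bdev" wrapper and the use of a temporary file instead of a process substitution are assumptions made only for illustration.

    # Sketch: equivalent bdevperf config and invocation for this run
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload parameters as the traced command: 64 deep, 64 KiB I/O, verify, 1 second
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1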
00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:17.407 18:05:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:17.407 "params": { 00:13:17.407 "name": "Nvme0", 00:13:17.407 "trtype": "rdma", 00:13:17.407 "traddr": "192.168.100.8", 00:13:17.407 "adrfam": "ipv4", 00:13:17.407 "trsvcid": "4420", 00:13:17.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:17.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:17.407 "hdgst": false, 00:13:17.407 "ddgst": false 00:13:17.407 }, 00:13:17.407 "method": "bdev_nvme_attach_controller" 00:13:17.407 }' 00:13:17.407 [2024-07-15 18:05:17.751514] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:13:17.407 [2024-07-15 18:05:17.751564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1591912 ] 00:13:17.407 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.666 [2024-07-15 18:05:17.836921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.666 [2024-07-15 18:05:17.906570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.924 Running I/O for 1 seconds... 00:13:18.860 00:13:18.860 Latency(us) 00:13:18.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.860 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:18.860 Verification LBA range: start 0x0 length 0x400 00:13:18.860 Nvme0n1 : 1.01 3133.00 195.81 0.00 0.00 20013.56 635.70 42572.19 00:13:18.860 =================================================================================================================== 00:13:18.860 Total : 3133.00 195.81 0.00 0.00 20013.56 635.70 42572.19 00:13:19.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1591622 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:19.119 rmmod nvme_rdma 00:13:19.119 rmmod 
nvme_fabrics 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1591324 ']' 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1591324 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1591324 ']' 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1591324 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1591324 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1591324' 00:13:19.119 killing process with pid 1591324 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1591324 00:13:19.119 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1591324 00:13:19.378 [2024-07-15 18:05:19.664506] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:19.378 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.378 18:05:19 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:19.378 18:05:19 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:19.378 00:13:19.378 real 0m13.635s 00:13:19.378 user 0m25.174s 00:13:19.378 sys 0m7.564s 00:13:19.378 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:19.378 18:05:19 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:19.378 ************************************ 00:13:19.378 END TEST nvmf_host_management 00:13:19.378 ************************************ 00:13:19.378 18:05:19 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:19.378 18:05:19 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:19.378 18:05:19 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:19.378 18:05:19 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.378 18:05:19 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:19.378 ************************************ 00:13:19.378 START TEST nvmf_lvol 00:13:19.378 ************************************ 00:13:19.378 18:05:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:19.637 * Looking for test storage... 
00:13:19.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.637 18:05:19 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.638 18:05:19 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.619 18:05:28 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:29.619 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:29.619 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:29.619 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:29.619 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.619 18:05:28 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:29.619 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:29.620 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:29.620 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:29.620 altname enp217s0f0np0 00:13:29.620 altname ens818f0np0 00:13:29.620 inet 192.168.100.8/24 scope global mlx_0_0 00:13:29.620 valid_lft forever preferred_lft forever 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:29.620 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:29.620 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:29.620 altname enp217s0f1np1 00:13:29.620 altname ens818f1np1 00:13:29.620 inet 192.168.100.9/24 scope global mlx_0_1 00:13:29.620 valid_lft forever preferred_lft forever 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:29.620 192.168.100.9' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:29.620 192.168.100.9' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:29.620 192.168.100.9' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1596356 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1596356 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@829 -- # '[' -z 1596356 ']' 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:29.620 18:05:28 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:29.620 [2024-07-15 18:05:28.588807] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:13:29.620 [2024-07-15 18:05:28.588860] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.620 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.620 [2024-07-15 18:05:28.668858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:29.620 [2024-07-15 18:05:28.737800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.620 [2024-07-15 18:05:28.737849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.620 [2024-07-15 18:05:28.737858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.620 [2024-07-15 18:05:28.737866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.620 [2024-07-15 18:05:28.737873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.620 [2024-07-15 18:05:28.737931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.620 [2024-07-15 18:05:28.738039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.620 [2024-07-15 18:05:28.738043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.620 18:05:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.620 18:05:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:29.620 18:05:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.620 18:05:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:29.621 18:05:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:29.621 18:05:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.621 18:05:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:29.621 [2024-07-15 18:05:29.615927] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f17200/0x1f1b6f0) succeed. 00:13:29.621 [2024-07-15 18:05:29.624903] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f187a0/0x1f5cd80) succeed. 
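With the RDMA transport created and both mlx5 IB devices up, the lvol test builds its target out of the RPCs traced below. Condensed into plain shell (rpc.py is assumed to be run from the SPDK tree against the default /var/tmp/spdk.sock, and capturing the returned UUIDs via command substitution is a simplification of what the script does), the sequence is:

    # Two 64 MiB malloc bdevs with 512-byte blocks, striped into a RAID-0
    scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc0
    scripts/rpc.py bdev_malloc_create 64 512          # -> Malloc1
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

    # Logical volume store on the RAID, plus a 20 MiB lvol inside it
    LVS=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)
    LVOL=$(scripts/rpc.py bdev_lvol_create -u "$LVS" lvol 20)

    # Export the lvol over NVMe-oF/RDMA on 192.168.100.8:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The run continues in the same style further down: bdev_lvol_snapshot and bdev_lvol_resize to 30 on the lvol, bdev_lvol_clone of the snapshot, bdev_lvol_inflate of the clone, and finally spdk_nvme_perf driving random writes against the exported namespace.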
00:13:29.621 18:05:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.621 18:05:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:29.621 18:05:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:29.878 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:29.878 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:30.136 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:30.136 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c3bba616-a892-47b5-a3ea-7ce3f26146f3 00:13:30.136 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3bba616-a892-47b5-a3ea-7ce3f26146f3 lvol 20 00:13:30.394 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e12eaa6a-ae45-48ee-aec4-6cbb265ee277 00:13:30.394 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.651 18:05:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e12eaa6a-ae45-48ee-aec4-6cbb265ee277 00:13:30.652 18:05:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:30.909 [2024-07-15 18:05:31.156339] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:30.909 18:05:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:31.167 18:05:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1596919 00:13:31.167 18:05:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:31.167 18:05:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:31.167 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.169 18:05:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e12eaa6a-ae45-48ee-aec4-6cbb265ee277 MY_SNAPSHOT 00:13:32.169 18:05:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=20141c8c-e802-4d59-852e-310bcefcd1e9 00:13:32.169 18:05:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e12eaa6a-ae45-48ee-aec4-6cbb265ee277 30 00:13:32.427 18:05:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 20141c8c-e802-4d59-852e-310bcefcd1e9 MY_CLONE 00:13:32.686 18:05:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=fbb61557-26cf-4a8e-9478-e15f06a26a9b 00:13:32.686 18:05:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fbb61557-26cf-4a8e-9478-e15f06a26a9b 00:13:32.944 18:05:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1596919 00:13:42.925 Initializing NVMe Controllers 00:13:42.925 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:13:42.925 Controller IO queue size 128, less than required. 00:13:42.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:42.925 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:42.925 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:42.925 Initialization complete. Launching workers. 00:13:42.925 ======================================================== 00:13:42.925 Latency(us) 00:13:42.925 Device Information : IOPS MiB/s Average min max 00:13:42.925 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16572.10 64.73 7726.04 2196.97 34587.93 00:13:42.925 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16445.10 64.24 7784.95 3652.74 37065.33 00:13:42.925 ======================================================== 00:13:42.925 Total : 33017.20 128.97 7755.38 2196.97 37065.33 00:13:42.925 00:13:42.925 18:05:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:42.925 18:05:42 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e12eaa6a-ae45-48ee-aec4-6cbb265ee277 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3bba616-a892-47b5-a3ea-7ce3f26146f3 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:42.925 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:42.925 rmmod nvme_rdma 00:13:42.925 rmmod nvme_fabrics 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1596356 ']' 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1596356 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1596356 ']' 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@952 -- # kill -0 1596356 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1596356 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1596356' 00:13:43.184 killing process with pid 1596356 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1596356 00:13:43.184 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1596356 00:13:43.443 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.443 18:05:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:43.443 00:13:43.443 real 0m23.912s 00:13:43.443 user 1m11.605s 00:13:43.443 sys 0m7.820s 00:13:43.443 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.443 18:05:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:43.443 ************************************ 00:13:43.443 END TEST nvmf_lvol 00:13:43.443 ************************************ 00:13:43.443 18:05:43 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:43.444 18:05:43 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:43.444 18:05:43 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.444 18:05:43 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.444 18:05:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:43.444 ************************************ 00:13:43.444 START TEST nvmf_lvs_grow 00:13:43.444 ************************************ 00:13:43.444 18:05:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:43.703 * Looking for test storage... 
00:13:43.703 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.703 18:05:43 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:13:51.841 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:51.841 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:51.841 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.841 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.842 18:05:51 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:51.842 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:51.842 18:05:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:51.842 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.842 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:51.842 altname enp217s0f0np0 00:13:51.842 altname ens818f0np0 00:13:51.842 inet 192.168.100.8/24 scope global mlx_0_0 00:13:51.842 valid_lft forever preferred_lft forever 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:51.842 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.842 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:51.842 altname enp217s0f1np1 00:13:51.842 altname ens818f1np1 00:13:51.842 inet 192.168.100.9/24 scope global mlx_0_1 00:13:51.842 valid_lft forever preferred_lft forever 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:51.842 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:51.843 192.168.100.9' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:51.843 192.168.100.9' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:51.843 192.168.100.9' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1602962 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1602962 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1602962 ']' 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.843 18:05:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:51.843 [2024-07-15 18:05:52.209245] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:13:51.843 [2024-07-15 18:05:52.209293] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.102 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.102 [2024-07-15 18:05:52.287345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.102 [2024-07-15 18:05:52.359291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.102 [2024-07-15 18:05:52.359329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.102 [2024-07-15 18:05:52.359339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.102 [2024-07-15 18:05:52.359347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.102 [2024-07-15 18:05:52.359370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
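The interface and address discovery above comes down to a few shell one-liners; a condensed sketch of what nvmf/common.sh is doing here (helper name kept, body shortened), not a verbatim copy of it:

    # first IPv4 address on an RDMA-capable port, without the prefix length
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)  # 192.168.100.9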
00:13:52.102 [2024-07-15 18:05:52.359391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.670 18:05:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:52.929 [2024-07-15 18:05:53.226874] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1045b20/0x104a010) succeed. 00:13:52.929 [2024-07-15 18:05:53.235784] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1047020/0x108b6a0) succeed. 00:13:52.929 18:05:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:52.929 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:52.929 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.929 18:05:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:52.929 ************************************ 00:13:52.929 START TEST lvs_grow_clean 00:13:52.929 ************************************ 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:53.188 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:53.446 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:13:53.446 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:13:53.446 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:53.705 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:53.705 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:53.705 18:05:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb lvol 150 00:13:53.705 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=84c1e974-5fb8-4f68-9863-99dd626462c6 00:13:53.705 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:53.705 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:53.964 [2024-07-15 18:05:54.217714] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:53.964 [2024-07-15 18:05:54.217763] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:53.964 true 00:13:53.964 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:13:53.964 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:54.223 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:54.223 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.223 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84c1e974-5fb8-4f68-9863-99dd626462c6 00:13:54.482 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:54.742 [2024-07-15 18:05:54.896103] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:54.742 18:05:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1603535 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1603535 /var/tmp/bdevperf.sock 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1603535 ']' 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:54.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.742 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:54.742 [2024-07-15 18:05:55.101490] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:13:54.742 [2024-07-15 18:05:55.101539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603535 ] 00:13:54.742 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.002 [2024-07-15 18:05:55.181970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.002 [2024-07-15 18:05:55.251659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.570 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.570 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:55.570 18:05:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:55.829 Nvme0n1 00:13:55.829 18:05:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:56.087 [ 00:13:56.087 { 00:13:56.087 "name": "Nvme0n1", 00:13:56.087 "aliases": [ 00:13:56.087 "84c1e974-5fb8-4f68-9863-99dd626462c6" 00:13:56.087 ], 00:13:56.087 "product_name": "NVMe disk", 00:13:56.087 "block_size": 4096, 00:13:56.087 "num_blocks": 38912, 00:13:56.087 "uuid": "84c1e974-5fb8-4f68-9863-99dd626462c6", 00:13:56.087 "assigned_rate_limits": { 00:13:56.087 "rw_ios_per_sec": 0, 00:13:56.087 "rw_mbytes_per_sec": 0, 00:13:56.087 "r_mbytes_per_sec": 0, 00:13:56.087 "w_mbytes_per_sec": 0 00:13:56.087 }, 00:13:56.087 "claimed": false, 00:13:56.087 "zoned": false, 00:13:56.087 "supported_io_types": { 00:13:56.087 "read": 
true, 00:13:56.087 "write": true, 00:13:56.087 "unmap": true, 00:13:56.087 "flush": true, 00:13:56.087 "reset": true, 00:13:56.087 "nvme_admin": true, 00:13:56.087 "nvme_io": true, 00:13:56.087 "nvme_io_md": false, 00:13:56.087 "write_zeroes": true, 00:13:56.087 "zcopy": false, 00:13:56.087 "get_zone_info": false, 00:13:56.087 "zone_management": false, 00:13:56.087 "zone_append": false, 00:13:56.087 "compare": true, 00:13:56.087 "compare_and_write": true, 00:13:56.087 "abort": true, 00:13:56.087 "seek_hole": false, 00:13:56.087 "seek_data": false, 00:13:56.087 "copy": true, 00:13:56.087 "nvme_iov_md": false 00:13:56.087 }, 00:13:56.087 "memory_domains": [ 00:13:56.087 { 00:13:56.087 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:56.087 "dma_device_type": 0 00:13:56.087 } 00:13:56.087 ], 00:13:56.087 "driver_specific": { 00:13:56.087 "nvme": [ 00:13:56.087 { 00:13:56.087 "trid": { 00:13:56.087 "trtype": "RDMA", 00:13:56.087 "adrfam": "IPv4", 00:13:56.087 "traddr": "192.168.100.8", 00:13:56.087 "trsvcid": "4420", 00:13:56.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:56.087 }, 00:13:56.087 "ctrlr_data": { 00:13:56.087 "cntlid": 1, 00:13:56.087 "vendor_id": "0x8086", 00:13:56.087 "model_number": "SPDK bdev Controller", 00:13:56.087 "serial_number": "SPDK0", 00:13:56.087 "firmware_revision": "24.09", 00:13:56.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:56.087 "oacs": { 00:13:56.087 "security": 0, 00:13:56.087 "format": 0, 00:13:56.087 "firmware": 0, 00:13:56.087 "ns_manage": 0 00:13:56.087 }, 00:13:56.087 "multi_ctrlr": true, 00:13:56.087 "ana_reporting": false 00:13:56.087 }, 00:13:56.087 "vs": { 00:13:56.087 "nvme_version": "1.3" 00:13:56.087 }, 00:13:56.087 "ns_data": { 00:13:56.087 "id": 1, 00:13:56.087 "can_share": true 00:13:56.087 } 00:13:56.087 } 00:13:56.087 ], 00:13:56.087 "mp_policy": "active_passive" 00:13:56.087 } 00:13:56.087 } 00:13:56.087 ] 00:13:56.087 18:05:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1603777 00:13:56.087 18:05:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:56.087 18:05:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:56.087 Running I/O for 10 seconds... 
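Before the 10-second randwrite run starts, the whole clean-grow setup is the short RPC sequence exercised above. A condensed sketch (sizes, names and addresses mirror the test, paths shortened, UUIDs as placeholders):

    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096      # 200M file-backed AIO bdev, 4K blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 150   # 150M lvol; the store reports 49 data clusters
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # bdevperf (listening on its own RPC socket) then attaches over the fabric:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0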
00:13:57.486 Latency(us) 00:13:57.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.486 Nvme0n1 : 1.00 35205.00 137.52 0.00 0.00 0.00 0.00 0.00 00:13:57.486 =================================================================================================================== 00:13:57.486 Total : 35205.00 137.52 0.00 0.00 0.00 0.00 0.00 00:13:57.486 00:13:58.056 18:05:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:13:58.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.056 Nvme0n1 : 2.00 35490.50 138.63 0.00 0.00 0.00 0.00 0.00 00:13:58.056 =================================================================================================================== 00:13:58.056 Total : 35490.50 138.63 0.00 0.00 0.00 0.00 0.00 00:13:58.056 00:13:58.314 true 00:13:58.314 18:05:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:13:58.314 18:05:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:58.314 18:05:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:58.314 18:05:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:58.314 18:05:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1603777 00:13:59.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.251 Nvme0n1 : 3.00 35574.67 138.96 0.00 0.00 0.00 0.00 0.00 00:13:59.251 =================================================================================================================== 00:13:59.251 Total : 35574.67 138.96 0.00 0.00 0.00 0.00 0.00 00:13:59.251 00:14:00.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.188 Nvme0n1 : 4.00 35545.00 138.85 0.00 0.00 0.00 0.00 0.00 00:14:00.188 =================================================================================================================== 00:14:00.188 Total : 35545.00 138.85 0.00 0.00 0.00 0.00 0.00 00:14:00.188 00:14:01.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.125 Nvme0n1 : 5.00 35507.20 138.70 0.00 0.00 0.00 0.00 0.00 00:14:01.125 =================================================================================================================== 00:14:01.125 Total : 35507.20 138.70 0.00 0.00 0.00 0.00 0.00 00:14:01.125 00:14:02.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.088 Nvme0n1 : 6.00 35557.50 138.90 0.00 0.00 0.00 0.00 0.00 00:14:02.088 =================================================================================================================== 00:14:02.088 Total : 35557.50 138.90 0.00 0.00 0.00 0.00 0.00 00:14:02.088 00:14:03.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.482 Nvme0n1 : 7.00 35638.86 139.21 0.00 0.00 0.00 0.00 0.00 00:14:03.482 =================================================================================================================== 00:14:03.482 Total : 35638.86 139.21 0.00 0.00 
0.00 0.00 0.00 00:14:03.482 00:14:04.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.049 Nvme0n1 : 8.00 35628.38 139.17 0.00 0.00 0.00 0.00 0.00 00:14:04.049 =================================================================================================================== 00:14:04.049 Total : 35628.38 139.17 0.00 0.00 0.00 0.00 0.00 00:14:04.049 00:14:05.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.426 Nvme0n1 : 9.00 35655.44 139.28 0.00 0.00 0.00 0.00 0.00 00:14:05.426 =================================================================================================================== 00:14:05.426 Total : 35655.44 139.28 0.00 0.00 0.00 0.00 0.00 00:14:05.427 00:14:06.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.362 Nvme0n1 : 10.00 35641.60 139.22 0.00 0.00 0.00 0.00 0.00 00:14:06.362 =================================================================================================================== 00:14:06.363 Total : 35641.60 139.22 0.00 0.00 0.00 0.00 0.00 00:14:06.363 00:14:06.363 00:14:06.363 Latency(us) 00:14:06.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.363 Nvme0n1 : 10.00 35643.33 139.23 0.00 0.00 3588.19 2411.72 9961.47 00:14:06.363 =================================================================================================================== 00:14:06.363 Total : 35643.33 139.23 0.00 0.00 3588.19 2411.72 9961.47 00:14:06.363 0 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1603535 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1603535 ']' 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1603535 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603535 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603535' 00:14:06.363 killing process with pid 1603535 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1603535 00:14:06.363 Received shutdown signal, test time was about 10.000000 seconds 00:14:06.363 00:14:06.363 Latency(us) 00:14:06.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.363 =================================================================================================================== 00:14:06.363 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1603535 00:14:06.363 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:06.621 18:06:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:06.879 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:06.879 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:06.879 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:06.880 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:06.880 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:07.137 [2024-07-15 18:06:07.426411] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:07.137 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:07.137 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:07.137 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:07.137 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:07.138 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:07.396 request: 00:14:07.396 { 00:14:07.396 "uuid": "fd3c5d6a-9b59-4461-8e00-8ea717d941eb", 00:14:07.396 "method": "bdev_lvol_get_lvstores", 00:14:07.396 "req_id": 1 00:14:07.396 } 00:14:07.396 Got JSON-RPC error response 00:14:07.396 response: 00:14:07.396 { 00:14:07.396 "code": -19, 00:14:07.396 "message": "No such device" 00:14:07.396 } 
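The JSON-RPC error above is the expected outcome: bdev_aio_delete removed the base bdev, so looking up the lvstore must fail, and the test wraps the call in NOT, which succeeds only when the wrapped command fails. Roughly (a sketch of the pattern, not the exact helper from autotest_common.sh, which also special-cases signal exits):

    NOT() {
            local es=0
            "$@" || es=$?    # run the wrapped command, remember its exit status
            (( es != 0 ))    # invert it: pass only if the command failed
    }
    NOT rpc.py bdev_lvol_get_lvstores -u <deleted-lvstore-uuid>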
00:14:07.396 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:07.396 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:07.396 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:07.396 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:07.396 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:07.654 aio_bdev 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 84c1e974-5fb8-4f68-9863-99dd626462c6 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=84c1e974-5fb8-4f68-9863-99dd626462c6 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:07.654 18:06:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 84c1e974-5fb8-4f68-9863-99dd626462c6 -t 2000 00:14:07.912 [ 00:14:07.912 { 00:14:07.912 "name": "84c1e974-5fb8-4f68-9863-99dd626462c6", 00:14:07.912 "aliases": [ 00:14:07.912 "lvs/lvol" 00:14:07.912 ], 00:14:07.912 "product_name": "Logical Volume", 00:14:07.912 "block_size": 4096, 00:14:07.912 "num_blocks": 38912, 00:14:07.912 "uuid": "84c1e974-5fb8-4f68-9863-99dd626462c6", 00:14:07.912 "assigned_rate_limits": { 00:14:07.912 "rw_ios_per_sec": 0, 00:14:07.912 "rw_mbytes_per_sec": 0, 00:14:07.912 "r_mbytes_per_sec": 0, 00:14:07.912 "w_mbytes_per_sec": 0 00:14:07.912 }, 00:14:07.912 "claimed": false, 00:14:07.912 "zoned": false, 00:14:07.912 "supported_io_types": { 00:14:07.912 "read": true, 00:14:07.912 "write": true, 00:14:07.912 "unmap": true, 00:14:07.912 "flush": false, 00:14:07.912 "reset": true, 00:14:07.912 "nvme_admin": false, 00:14:07.912 "nvme_io": false, 00:14:07.912 "nvme_io_md": false, 00:14:07.912 "write_zeroes": true, 00:14:07.912 "zcopy": false, 00:14:07.912 "get_zone_info": false, 00:14:07.912 "zone_management": false, 00:14:07.912 "zone_append": false, 00:14:07.912 "compare": false, 00:14:07.912 "compare_and_write": false, 00:14:07.912 "abort": false, 00:14:07.912 "seek_hole": true, 00:14:07.912 "seek_data": true, 00:14:07.912 "copy": false, 00:14:07.912 "nvme_iov_md": false 00:14:07.912 }, 00:14:07.912 "driver_specific": { 00:14:07.912 "lvol": { 00:14:07.912 "lvol_store_uuid": "fd3c5d6a-9b59-4461-8e00-8ea717d941eb", 00:14:07.912 "base_bdev": "aio_bdev", 00:14:07.912 "thin_provision": false, 00:14:07.912 "num_allocated_clusters": 38, 00:14:07.912 "snapshot": false, 00:14:07.912 "clone": false, 00:14:07.912 "esnap_clone": false 00:14:07.912 } 00:14:07.912 } 00:14:07.912 } 
00:14:07.912 ] 00:14:07.912 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:07.912 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:07.912 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:08.171 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:08.171 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:08.171 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:08.171 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:08.171 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 84c1e974-5fb8-4f68-9863-99dd626462c6 00:14:08.430 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd3c5d6a-9b59-4461-8e00-8ea717d941eb 00:14:08.688 18:06:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:08.688 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:08.688 00:14:08.688 real 0m15.724s 00:14:08.688 user 0m15.546s 00:14:08.688 sys 0m1.232s 00:14:08.688 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.688 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:08.688 ************************************ 00:14:08.688 END TEST lvs_grow_clean 00:14:08.689 ************************************ 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:08.947 ************************************ 00:14:08.947 START TEST lvs_grow_dirty 00:14:08.947 ************************************ 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:08.947 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:09.206 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:09.206 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:09.206 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:09.465 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:09.465 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:09.465 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 lvol 150 00:14:09.465 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:09.465 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:09.465 18:06:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:09.724 [2024-07-15 18:06:10.012801] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:09.724 [2024-07-15 18:06:10.012853] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:09.724 true 00:14:09.724 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:09.724 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:09.983 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:14:09.983 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.983 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:10.241 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:10.500 [2024-07-15 18:06:10.670923] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1606811 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1606811 /var/tmp/bdevperf.sock 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1606811 ']' 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:10.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.500 18:06:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:10.500 [2024-07-15 18:06:10.894259] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
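For reference, the lvs_grow "dirty" case being traced here reduces to the flow sketched below. This is a minimal reconstruction assembled only from the rpc.py, truncate and bdevperf invocations visible in this log; "rpc.py" and "bdevperf" stand in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/... paths, $TESTDIR/$lvs/$lvol are placeholders for the file path and UUIDs printed above, and the 200M/400M sizes, 4 MiB cluster size and 192.168.100.8:4420 RDMA listener are simply the values this run used.

  # back the lvstore with a 200M AIO file, 4 MiB clusters (49 data clusters)
  truncate -s 200M $TESTDIR/aio_bdev
  rpc.py bdev_aio_create $TESTDIR/aio_bdev aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(rpc.py bdev_lvol_create -u $lvs lvol 150)
  # grow the backing file and rescan; block count goes 51200 -> 102400,
  # but total_data_clusters stays 49 until the lvstore itself is grown
  truncate -s 400M $TESTDIR/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev
  # export the lvol over NVMe-oF/RDMA and drive random writes from bdevperf
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # grow the lvstore while I/O is in flight (the "dirty" part of the test),
  # then confirm total_data_clusters went from 49 to 99
  rpc.py bdev_lvol_grow_lvstore -u $lvs
  rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'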
00:14:10.500 [2024-07-15 18:06:10.894313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606811 ] 00:14:10.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.759 [2024-07-15 18:06:10.976899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.759 [2024-07-15 18:06:11.045063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.324 18:06:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.324 18:06:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:11.324 18:06:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:11.586 Nvme0n1 00:14:11.586 18:06:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:11.879 [ 00:14:11.879 { 00:14:11.879 "name": "Nvme0n1", 00:14:11.879 "aliases": [ 00:14:11.879 "cbad8904-9ea6-4f4d-b76b-95f3b5fbd230" 00:14:11.879 ], 00:14:11.879 "product_name": "NVMe disk", 00:14:11.879 "block_size": 4096, 00:14:11.879 "num_blocks": 38912, 00:14:11.879 "uuid": "cbad8904-9ea6-4f4d-b76b-95f3b5fbd230", 00:14:11.879 "assigned_rate_limits": { 00:14:11.879 "rw_ios_per_sec": 0, 00:14:11.879 "rw_mbytes_per_sec": 0, 00:14:11.879 "r_mbytes_per_sec": 0, 00:14:11.879 "w_mbytes_per_sec": 0 00:14:11.879 }, 00:14:11.879 "claimed": false, 00:14:11.879 "zoned": false, 00:14:11.879 "supported_io_types": { 00:14:11.879 "read": true, 00:14:11.879 "write": true, 00:14:11.879 "unmap": true, 00:14:11.879 "flush": true, 00:14:11.879 "reset": true, 00:14:11.879 "nvme_admin": true, 00:14:11.879 "nvme_io": true, 00:14:11.879 "nvme_io_md": false, 00:14:11.879 "write_zeroes": true, 00:14:11.879 "zcopy": false, 00:14:11.879 "get_zone_info": false, 00:14:11.879 "zone_management": false, 00:14:11.879 "zone_append": false, 00:14:11.879 "compare": true, 00:14:11.879 "compare_and_write": true, 00:14:11.879 "abort": true, 00:14:11.879 "seek_hole": false, 00:14:11.879 "seek_data": false, 00:14:11.879 "copy": true, 00:14:11.879 "nvme_iov_md": false 00:14:11.879 }, 00:14:11.879 "memory_domains": [ 00:14:11.879 { 00:14:11.879 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:14:11.879 "dma_device_type": 0 00:14:11.879 } 00:14:11.879 ], 00:14:11.879 "driver_specific": { 00:14:11.879 "nvme": [ 00:14:11.879 { 00:14:11.879 "trid": { 00:14:11.879 "trtype": "RDMA", 00:14:11.879 "adrfam": "IPv4", 00:14:11.879 "traddr": "192.168.100.8", 00:14:11.879 "trsvcid": "4420", 00:14:11.879 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:11.879 }, 00:14:11.879 "ctrlr_data": { 00:14:11.879 "cntlid": 1, 00:14:11.879 "vendor_id": "0x8086", 00:14:11.879 "model_number": "SPDK bdev Controller", 00:14:11.879 "serial_number": "SPDK0", 00:14:11.879 "firmware_revision": "24.09", 00:14:11.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:11.879 "oacs": { 00:14:11.879 "security": 0, 00:14:11.879 "format": 0, 00:14:11.879 "firmware": 0, 00:14:11.879 "ns_manage": 0 00:14:11.879 }, 00:14:11.879 "multi_ctrlr": true, 00:14:11.879 "ana_reporting": false 
00:14:11.879 }, 00:14:11.879 "vs": { 00:14:11.879 "nvme_version": "1.3" 00:14:11.879 }, 00:14:11.879 "ns_data": { 00:14:11.879 "id": 1, 00:14:11.879 "can_share": true 00:14:11.879 } 00:14:11.879 } 00:14:11.879 ], 00:14:11.879 "mp_policy": "active_passive" 00:14:11.879 } 00:14:11.879 } 00:14:11.879 ] 00:14:11.879 18:06:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1607082 00:14:11.879 18:06:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:11.879 18:06:12 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:11.879 Running I/O for 10 seconds... 00:14:13.253 Latency(us) 00:14:13.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.254 Nvme0n1 : 1.00 35039.00 136.87 0.00 0.00 0.00 0.00 0.00 00:14:13.254 =================================================================================================================== 00:14:13.254 Total : 35039.00 136.87 0.00 0.00 0.00 0.00 0.00 00:14:13.254 00:14:13.820 18:06:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:14.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.079 Nvme0n1 : 2.00 35408.50 138.31 0.00 0.00 0.00 0.00 0.00 00:14:14.079 =================================================================================================================== 00:14:14.079 Total : 35408.50 138.31 0.00 0.00 0.00 0.00 0.00 00:14:14.079 00:14:14.079 true 00:14:14.079 18:06:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:14.079 18:06:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:14.336 18:06:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:14.336 18:06:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:14.336 18:06:14 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1607082 00:14:14.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.902 Nvme0n1 : 3.00 35551.67 138.87 0.00 0.00 0.00 0.00 0.00 00:14:14.902 =================================================================================================================== 00:14:14.902 Total : 35551.67 138.87 0.00 0.00 0.00 0.00 0.00 00:14:14.902 00:14:15.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.837 Nvme0n1 : 4.00 35679.75 139.37 0.00 0.00 0.00 0.00 0.00 00:14:15.837 =================================================================================================================== 00:14:15.837 Total : 35679.75 139.37 0.00 0.00 0.00 0.00 0.00 00:14:15.837 00:14:17.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.214 Nvme0n1 : 5.00 35744.80 139.63 0.00 0.00 0.00 0.00 0.00 00:14:17.214 
=================================================================================================================== 00:14:17.214 Total : 35744.80 139.63 0.00 0.00 0.00 0.00 0.00 00:14:17.214 00:14:18.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.154 Nvme0n1 : 6.00 35796.33 139.83 0.00 0.00 0.00 0.00 0.00 00:14:18.154 =================================================================================================================== 00:14:18.154 Total : 35796.33 139.83 0.00 0.00 0.00 0.00 0.00 00:14:18.154 00:14:19.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.090 Nvme0n1 : 7.00 35831.57 139.97 0.00 0.00 0.00 0.00 0.00 00:14:19.090 =================================================================================================================== 00:14:19.090 Total : 35831.57 139.97 0.00 0.00 0.00 0.00 0.00 00:14:19.090 00:14:20.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.026 Nvme0n1 : 8.00 35820.50 139.92 0.00 0.00 0.00 0.00 0.00 00:14:20.026 =================================================================================================================== 00:14:20.026 Total : 35820.50 139.92 0.00 0.00 0.00 0.00 0.00 00:14:20.026 00:14:20.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.962 Nvme0n1 : 9.00 35811.89 139.89 0.00 0.00 0.00 0.00 0.00 00:14:20.962 =================================================================================================================== 00:14:20.962 Total : 35811.89 139.89 0.00 0.00 0.00 0.00 0.00 00:14:20.962 00:14:21.899 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.900 Nvme0n1 : 10.00 35839.50 140.00 0.00 0.00 0.00 0.00 0.00 00:14:21.900 =================================================================================================================== 00:14:21.900 Total : 35839.50 140.00 0.00 0.00 0.00 0.00 0.00 00:14:21.900 00:14:21.900 00:14:21.900 Latency(us) 00:14:21.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.900 Nvme0n1 : 10.00 35838.15 139.99 0.00 0.00 3568.67 2293.76 14155.78 00:14:21.900 =================================================================================================================== 00:14:21.900 Total : 35838.15 139.99 0.00 0.00 3568.67 2293.76 14155.78 00:14:21.900 0 00:14:21.900 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1606811 00:14:21.900 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1606811 ']' 00:14:21.900 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1606811 00:14:21.900 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:21.900 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:21.900 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606811 00:14:22.160 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:22.160 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:22.160 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 1606811' 00:14:22.160 killing process with pid 1606811 00:14:22.160 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1606811 00:14:22.160 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.160 00:14:22.160 Latency(us) 00:14:22.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.160 =================================================================================================================== 00:14:22.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.160 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1606811 00:14:22.160 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:22.419 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:22.677 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:22.677 18:06:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:22.677 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:22.677 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:22.677 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1602962 00:14:22.677 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1602962 00:14:22.950 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1602962 Killed "${NVMF_APP[@]}" "$@" 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1608947 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1608947 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1608947 ']' 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.950 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:22.950 [2024-07-15 18:06:23.160959] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:14:22.950 [2024-07-15 18:06:23.161045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.950 EAL: No free 2048 kB hugepages reported on node 1 00:14:22.950 [2024-07-15 18:06:23.244657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.950 [2024-07-15 18:06:23.316498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.950 [2024-07-15 18:06:23.316538] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.950 [2024-07-15 18:06:23.316547] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.950 [2024-07-15 18:06:23.316556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.950 [2024-07-15 18:06:23.316563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.950 [2024-07-15 18:06:23.316583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.891 18:06:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.891 [2024-07-15 18:06:24.154183] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:23.891 [2024-07-15 18:06:24.154267] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:23.891 [2024-07-15 18:06:24.154293] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.891 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:24.151 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 -t 2000 00:14:24.151 [ 00:14:24.151 { 00:14:24.151 "name": "cbad8904-9ea6-4f4d-b76b-95f3b5fbd230", 00:14:24.151 "aliases": [ 00:14:24.151 "lvs/lvol" 00:14:24.151 ], 00:14:24.151 "product_name": "Logical Volume", 00:14:24.151 "block_size": 4096, 00:14:24.151 "num_blocks": 38912, 00:14:24.151 "uuid": "cbad8904-9ea6-4f4d-b76b-95f3b5fbd230", 00:14:24.151 "assigned_rate_limits": { 00:14:24.151 "rw_ios_per_sec": 0, 00:14:24.151 "rw_mbytes_per_sec": 0, 00:14:24.151 "r_mbytes_per_sec": 0, 00:14:24.151 "w_mbytes_per_sec": 0 00:14:24.151 }, 00:14:24.151 "claimed": false, 00:14:24.151 "zoned": false, 00:14:24.151 "supported_io_types": { 00:14:24.151 "read": true, 00:14:24.151 "write": true, 00:14:24.151 "unmap": true, 00:14:24.151 "flush": false, 00:14:24.151 "reset": true, 00:14:24.151 "nvme_admin": false, 00:14:24.151 "nvme_io": false, 00:14:24.151 "nvme_io_md": false, 00:14:24.151 "write_zeroes": true, 00:14:24.151 "zcopy": false, 00:14:24.151 "get_zone_info": false, 00:14:24.151 "zone_management": false, 00:14:24.151 "zone_append": false, 00:14:24.151 "compare": false, 00:14:24.151 "compare_and_write": false, 00:14:24.151 "abort": false, 00:14:24.151 "seek_hole": true, 00:14:24.151 "seek_data": true, 00:14:24.151 "copy": false, 00:14:24.151 "nvme_iov_md": false 00:14:24.151 }, 00:14:24.151 "driver_specific": { 00:14:24.151 "lvol": { 00:14:24.151 "lvol_store_uuid": "00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8", 00:14:24.151 "base_bdev": "aio_bdev", 00:14:24.151 "thin_provision": false, 00:14:24.151 "num_allocated_clusters": 38, 00:14:24.151 "snapshot": false, 00:14:24.151 "clone": false, 00:14:24.151 "esnap_clone": false 00:14:24.151 } 00:14:24.151 } 00:14:24.151 } 00:14:24.151 ] 00:14:24.151 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:24.151 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:24.151 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:24.410 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:24.410 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:24.410 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:24.669 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:24.669 18:06:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:24.669 [2024-07-15 18:06:24.998429] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:24.669 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:24.928 request: 00:14:24.928 { 00:14:24.928 "uuid": "00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8", 00:14:24.928 "method": "bdev_lvol_get_lvstores", 00:14:24.928 "req_id": 1 00:14:24.928 } 00:14:24.928 Got JSON-RPC error response 00:14:24.928 response: 00:14:24.928 { 00:14:24.928 "code": -19, 00:14:24.928 "message": "No such device" 00:14:24.928 } 00:14:24.928 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:24.928 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:24.928 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:24.928 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:24.928 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:25.187 aio_bdev 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@897 -- # local bdev_name=cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:25.187 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 -t 2000 00:14:25.446 [ 00:14:25.446 { 00:14:25.446 "name": "cbad8904-9ea6-4f4d-b76b-95f3b5fbd230", 00:14:25.446 "aliases": [ 00:14:25.446 "lvs/lvol" 00:14:25.446 ], 00:14:25.446 "product_name": "Logical Volume", 00:14:25.446 "block_size": 4096, 00:14:25.446 "num_blocks": 38912, 00:14:25.446 "uuid": "cbad8904-9ea6-4f4d-b76b-95f3b5fbd230", 00:14:25.446 "assigned_rate_limits": { 00:14:25.446 "rw_ios_per_sec": 0, 00:14:25.446 "rw_mbytes_per_sec": 0, 00:14:25.446 "r_mbytes_per_sec": 0, 00:14:25.446 "w_mbytes_per_sec": 0 00:14:25.446 }, 00:14:25.446 "claimed": false, 00:14:25.446 "zoned": false, 00:14:25.446 "supported_io_types": { 00:14:25.446 "read": true, 00:14:25.446 "write": true, 00:14:25.446 "unmap": true, 00:14:25.446 "flush": false, 00:14:25.446 "reset": true, 00:14:25.446 "nvme_admin": false, 00:14:25.446 "nvme_io": false, 00:14:25.446 "nvme_io_md": false, 00:14:25.446 "write_zeroes": true, 00:14:25.446 "zcopy": false, 00:14:25.446 "get_zone_info": false, 00:14:25.446 "zone_management": false, 00:14:25.446 "zone_append": false, 00:14:25.446 "compare": false, 00:14:25.446 "compare_and_write": false, 00:14:25.446 "abort": false, 00:14:25.446 "seek_hole": true, 00:14:25.446 "seek_data": true, 00:14:25.446 "copy": false, 00:14:25.446 "nvme_iov_md": false 00:14:25.446 }, 00:14:25.446 "driver_specific": { 00:14:25.446 "lvol": { 00:14:25.446 "lvol_store_uuid": "00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8", 00:14:25.446 "base_bdev": "aio_bdev", 00:14:25.446 "thin_provision": false, 00:14:25.446 "num_allocated_clusters": 38, 00:14:25.446 "snapshot": false, 00:14:25.446 "clone": false, 00:14:25.446 "esnap_clone": false 00:14:25.446 } 00:14:25.446 } 00:14:25.446 } 00:14:25.446 ] 00:14:25.446 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:25.446 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:25.446 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:25.705 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:25.705 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:25.705 18:06:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:14:25.705 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:25.705 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cbad8904-9ea6-4f4d-b76b-95f3b5fbd230 00:14:25.964 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 00ab9b90-eff6-4bd0-a8bb-5ef4bcae6fd8 00:14:26.228 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:26.228 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:26.228 00:14:26.228 real 0m17.451s 00:14:26.228 user 0m45.171s 00:14:26.228 sys 0m3.487s 00:14:26.228 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.228 18:06:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:26.228 ************************************ 00:14:26.228 END TEST lvs_grow_dirty 00:14:26.228 ************************************ 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:26.541 nvmf_trace.0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:26.541 rmmod nvme_rdma 00:14:26.541 rmmod nvme_fabrics 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@124 -- # set -e 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1608947 ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1608947 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1608947 ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1608947 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1608947 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1608947' 00:14:26.541 killing process with pid 1608947 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1608947 00:14:26.541 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1608947 00:14:26.799 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.799 18:06:26 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:26.799 00:14:26.799 real 0m43.189s 00:14:26.799 user 1m7.286s 00:14:26.799 sys 0m11.468s 00:14:26.799 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:26.799 18:06:26 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:26.799 ************************************ 00:14:26.799 END TEST nvmf_lvs_grow 00:14:26.799 ************************************ 00:14:26.799 18:06:26 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:26.799 18:06:26 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:26.799 18:06:26 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:26.799 18:06:26 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:26.799 18:06:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:26.799 ************************************ 00:14:26.799 START TEST nvmf_bdev_io_wait 00:14:26.799 ************************************ 00:14:26.799 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:26.799 * Looking for test storage... 
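The nvmftestinit trace that follows is the RDMA test-bed bring-up from nvmf/common.sh: it enumerates the Mellanox ports (two 0x15b3:0x1015 devices at 0000:d9:00.0/.1 in this run), loads the kernel IB/RDMA modules, and reads back the 192.168.100.0/24 addresses on mlx_0_0 and mlx_0_1. In shell terms it amounts to roughly the sketch below, based only on the modprobe and ip calls visible in the trace rather than the full helper functions.

  # kernel modules loaded by load_ib_rdma_modules
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe $m
  done
  # per-port IPv4 discovery used by allocate_nic_ips / get_ip_address
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8 in this run
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9 in this run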
00:14:26.799 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:26.799 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.799 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.800 18:06:27 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:26.800 18:06:27 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:34.921 
18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:34.921 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:34.921 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:34.921 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.921 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:34.922 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:34.922 18:06:34 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:34.922 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:34.922 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:34.922 altname enp217s0f0np0 00:14:34.922 altname ens818f0np0 00:14:34.922 inet 192.168.100.8/24 scope global mlx_0_0 00:14:34.922 valid_lft forever preferred_lft forever 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:34.922 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:34.922 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:34.922 altname enp217s0f1np1 00:14:34.922 altname ens818f1np1 00:14:34.922 inet 192.168.100.9/24 scope global mlx_0_1 00:14:34.922 valid_lft forever preferred_lft forever 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:34.922 18:06:34 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:34.922 18:06:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:34.922 192.168.100.9' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:34.922 192.168.100.9' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:34.922 192.168.100.9' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1613481 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1613481 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1613481 ']' 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.922 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:34.922 [2024-07-15 18:06:35.128610] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:14:34.923 [2024-07-15 18:06:35.128661] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.923 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.923 [2024-07-15 18:06:35.209306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.923 [2024-07-15 18:06:35.279661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.923 [2024-07-15 18:06:35.279705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:34.923 [2024-07-15 18:06:35.279714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.923 [2024-07-15 18:06:35.279722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.923 [2024-07-15 18:06:35.279745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.923 [2024-07-15 18:06:35.279799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.923 [2024-07-15 18:06:35.279892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.923 [2024-07-15 18:06:35.279978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.923 [2024-07-15 18:06:35.279980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 [2024-07-15 18:06:36.066713] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x975de0/0x97a2d0) succeed. 00:14:35.860 [2024-07-15 18:06:36.075677] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x977420/0x9bb960) succeed. 
00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 Malloc0 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:35.860 [2024-07-15 18:06:36.250866] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:35.860 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1613748 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1613750 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:35.861 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:36.120 { 00:14:36.120 "params": { 00:14:36.120 "name": "Nvme$subsystem", 00:14:36.120 "trtype": "$TEST_TRANSPORT", 00:14:36.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.120 "adrfam": "ipv4", 00:14:36.120 "trsvcid": "$NVMF_PORT", 00:14:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.120 "hdgst": ${hdgst:-false}, 00:14:36.120 "ddgst": ${ddgst:-false} 00:14:36.120 }, 00:14:36.120 "method": "bdev_nvme_attach_controller" 00:14:36.120 } 00:14:36.120 EOF 00:14:36.120 
)") 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1613752 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:36.120 { 00:14:36.120 "params": { 00:14:36.120 "name": "Nvme$subsystem", 00:14:36.120 "trtype": "$TEST_TRANSPORT", 00:14:36.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.120 "adrfam": "ipv4", 00:14:36.120 "trsvcid": "$NVMF_PORT", 00:14:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.120 "hdgst": ${hdgst:-false}, 00:14:36.120 "ddgst": ${ddgst:-false} 00:14:36.120 }, 00:14:36.120 "method": "bdev_nvme_attach_controller" 00:14:36.120 } 00:14:36.120 EOF 00:14:36.120 )") 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1613755 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:36.120 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:36.120 { 00:14:36.120 "params": { 00:14:36.120 "name": "Nvme$subsystem", 00:14:36.120 "trtype": "$TEST_TRANSPORT", 00:14:36.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.120 "adrfam": "ipv4", 00:14:36.120 "trsvcid": "$NVMF_PORT", 00:14:36.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.120 "hdgst": ${hdgst:-false}, 00:14:36.120 "ddgst": ${ddgst:-false} 00:14:36.120 }, 00:14:36.120 "method": "bdev_nvme_attach_controller" 00:14:36.120 } 00:14:36.120 EOF 00:14:36.120 )") 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:36.121 18:06:36 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:36.121 { 00:14:36.121 "params": { 00:14:36.121 "name": "Nvme$subsystem", 00:14:36.121 "trtype": "$TEST_TRANSPORT", 00:14:36.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.121 "adrfam": "ipv4", 00:14:36.121 "trsvcid": "$NVMF_PORT", 00:14:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.121 "hdgst": ${hdgst:-false}, 00:14:36.121 "ddgst": ${ddgst:-false} 00:14:36.121 }, 00:14:36.121 "method": "bdev_nvme_attach_controller" 00:14:36.121 } 00:14:36.121 EOF 00:14:36.121 )") 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1613748 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:36.121 "params": { 00:14:36.121 "name": "Nvme1", 00:14:36.121 "trtype": "rdma", 00:14:36.121 "traddr": "192.168.100.8", 00:14:36.121 "adrfam": "ipv4", 00:14:36.121 "trsvcid": "4420", 00:14:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.121 "hdgst": false, 00:14:36.121 "ddgst": false 00:14:36.121 }, 00:14:36.121 "method": "bdev_nvme_attach_controller" 00:14:36.121 }' 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:36.121 "params": { 00:14:36.121 "name": "Nvme1", 00:14:36.121 "trtype": "rdma", 00:14:36.121 "traddr": "192.168.100.8", 00:14:36.121 "adrfam": "ipv4", 00:14:36.121 "trsvcid": "4420", 00:14:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.121 "hdgst": false, 00:14:36.121 "ddgst": false 00:14:36.121 }, 00:14:36.121 "method": "bdev_nvme_attach_controller" 00:14:36.121 }' 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:36.121 "params": { 00:14:36.121 "name": "Nvme1", 00:14:36.121 "trtype": "rdma", 00:14:36.121 "traddr": "192.168.100.8", 00:14:36.121 "adrfam": "ipv4", 00:14:36.121 "trsvcid": "4420", 00:14:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.121 "hdgst": false, 00:14:36.121 "ddgst": false 00:14:36.121 }, 00:14:36.121 "method": "bdev_nvme_attach_controller" 00:14:36.121 }' 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:36.121 18:06:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:36.121 "params": { 00:14:36.121 "name": "Nvme1", 00:14:36.121 "trtype": "rdma", 00:14:36.121 "traddr": "192.168.100.8", 00:14:36.121 "adrfam": "ipv4", 00:14:36.121 "trsvcid": "4420", 00:14:36.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.121 "hdgst": false, 00:14:36.121 "ddgst": false 00:14:36.121 }, 00:14:36.121 "method": "bdev_nvme_attach_controller" 00:14:36.121 }' 00:14:36.121 [2024-07-15 18:06:36.302919] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:14:36.121 [2024-07-15 18:06:36.302975] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:36.121 [2024-07-15 18:06:36.303992] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:14:36.121 [2024-07-15 18:06:36.304046] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:36.121 [2024-07-15 18:06:36.304499] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:14:36.121 [2024-07-15 18:06:36.304546] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:36.121 [2024-07-15 18:06:36.306810] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:14:36.121 [2024-07-15 18:06:36.306854] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:36.121 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.121 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.121 [2024-07-15 18:06:36.511778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.380 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.380 [2024-07-15 18:06:36.585709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:36.380 [2024-07-15 18:06:36.599716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.380 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.380 [2024-07-15 18:06:36.674444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:36.380 [2024-07-15 18:06:36.698833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.380 [2024-07-15 18:06:36.766458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.640 [2024-07-15 18:06:36.783227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:36.640 [2024-07-15 18:06:36.840024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:36.640 Running I/O for 1 seconds... 00:14:36.640 Running I/O for 1 seconds... 00:14:36.640 Running I/O for 1 seconds... 00:14:36.640 Running I/O for 1 seconds... 00:14:37.577 00:14:37.577 Latency(us) 00:14:37.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.577 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:37.577 Nvme1n1 : 1.00 18437.99 72.02 0.00 0.00 6921.41 4115.66 14155.78 00:14:37.577 =================================================================================================================== 00:14:37.577 Total : 18437.99 72.02 0.00 0.00 6921.41 4115.66 14155.78 00:14:37.577 00:14:37.577 Latency(us) 00:14:37.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.577 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:37.577 Nvme1n1 : 1.00 17364.78 67.83 0.00 0.00 7350.41 4692.38 16777.22 00:14:37.577 =================================================================================================================== 00:14:37.578 Total : 17364.78 67.83 0.00 0.00 7350.41 4692.38 16777.22 00:14:37.578 00:14:37.578 Latency(us) 00:14:37.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.578 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:37.578 Nvme1n1 : 1.00 261941.73 1023.21 0.00 0.00 486.33 197.43 1821.90 00:14:37.578 =================================================================================================================== 00:14:37.578 Total : 261941.73 1023.21 0.00 0.00 486.33 197.43 1821.90 00:14:37.837 00:14:37.837 Latency(us) 00:14:37.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.837 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:37.837 Nvme1n1 : 1.00 15880.94 62.03 0.00 0.00 8040.74 3774.87 20342.37 00:14:37.837 =================================================================================================================== 00:14:37.837 Total : 15880.94 62.03 0.00 0.00 8040.74 3774.87 20342.37 00:14:37.837 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1613750 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1613752 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1613755 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:38.096 rmmod nvme_rdma 00:14:38.096 rmmod nvme_fabrics 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1613481 ']' 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1613481 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1613481 ']' 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1613481 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1613481 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1613481' 00:14:38.096 killing process with pid 1613481 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1613481 00:14:38.096 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1613481 00:14:38.355 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.355 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:38.355 00:14:38.355 real 0m11.640s 00:14:38.355 user 0m21.226s 00:14:38.355 sys 0m7.514s 00:14:38.355 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.355 18:06:38 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:38.355 ************************************ 00:14:38.355 END TEST nvmf_bdev_io_wait 00:14:38.355 ************************************ 00:14:38.355 18:06:38 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:38.355 18:06:38 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:14:38.355 18:06:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.355 18:06:38 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.355 18:06:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:38.355 ************************************ 00:14:38.355 START TEST nvmf_queue_depth 00:14:38.355 ************************************ 00:14:38.355 18:06:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:14:38.615 * Looking for test storage... 00:14:38.615 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.615 18:06:38 
nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.615 18:06:38 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:46.742 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:46.742 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.742 18:06:46 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.742 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:46.742 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:46.743 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:46.743 18:06:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:46.743 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:46.743 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:46.743 altname enp217s0f0np0 00:14:46.743 altname ens818f0np0 00:14:46.743 inet 192.168.100.8/24 scope global mlx_0_0 00:14:46.743 valid_lft forever preferred_lft forever 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:46.743 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:46.743 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:46.743 altname enp217s0f1np1 00:14:46.743 altname ens818f1np1 00:14:46.743 inet 192.168.100.9/24 scope global mlx_0_1 00:14:46.743 valid_lft forever preferred_lft forever 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:46.743 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:47.002 18:06:47 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:47.002 192.168.100.9' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:47.002 192.168.100.9' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:47.002 192.168.100.9' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1618209 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1618209 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1618209 ']' 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.002 18:06:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.002 [2024-07-15 18:06:47.287343] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:14:47.002 [2024-07-15 18:06:47.287398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.002 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.002 [2024-07-15 18:06:47.373549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.261 [2024-07-15 18:06:47.453679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.261 [2024-07-15 18:06:47.453718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.261 [2024-07-15 18:06:47.453728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.261 [2024-07-15 18:06:47.453736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.261 [2024-07-15 18:06:47.453744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.261 [2024-07-15 18:06:47.453772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:47.829 [2024-07-15 18:06:48.166211] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2451e20/0x2456310) succeed. 00:14:47.829 [2024-07-15 18:06:48.174843] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2453320/0x24979a0) succeed. 
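The trace above shows queue_depth.sh bringing up the target: nvmfappstart launches nvmf_tgt pinned to core 1 (-m 0x2) with all tracepoint groups enabled, waits for its RPC socket, and then creates the RDMA transport, at which point the two mlx5 IB devices are registered. A minimal stand-alone sketch of that same sequence, assuming an SPDK checkout at $SPDK_DIR and the default /var/tmp/spdk.sock socket; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    #!/usr/bin/env bash
    # Sketch of the target bring-up traced above (assumes $SPDK_DIR points at an SPDK build tree).
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
    rpc=$SPDK_DIR/scripts/rpc.py

    # Start nvmf_tgt on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF).
    $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Stand-in for waitforlisten: retry a harmless RPC until the socket answers.
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

    # Create the RDMA transport with the same options used by the test.
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192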
00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.829 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.088 Malloc0 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.088 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.089 [2024-07-15 18:06:48.262235] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1618489 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1618489 /var/tmp/bdevperf.sock 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1618489 ']' 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
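Lines queue_depth.sh@24-27 above build the target side entirely through RPCs: a 64 MiB malloc bdev, a subsystem that allows any host, a namespace, and an RDMA listener on the first harvested address. Issued directly with rpc.py (reusing the $rpc variable from the previous sketch), the same four calls would look like this:

    # Back the subsystem with a RAM disk: 64 MiB, 512-byte blocks.
    "$rpc" bdev_malloc_create 64 512 -b Malloc0

    # Create the subsystem, allow any host (-a), and set its serial number.
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

    # Attach the malloc bdev as a namespace of the subsystem.
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Expose the subsystem over RDMA on the mlx_0_0 address harvested earlier.
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420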
00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.089 18:06:48 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:48.089 [2024-07-15 18:06:48.311945] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:14:48.089 [2024-07-15 18:06:48.311989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1618489 ] 00:14:48.089 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.089 [2024-07-15 18:06:48.389578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.089 [2024-07-15 18:06:48.459242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:49.023 NVMe0n1 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.023 18:06:49 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.023 Running I/O for 10 seconds... 
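The initiator side is bdevperf started in wait-for-RPC mode (-z) with a queue depth of 1024 and 4 KiB verify I/O for 10 seconds; the NVMe-oF controller is then attached and the run triggered over bdevperf's own RPC socket. A condensed sketch of that sequence, reusing the $SPDK_DIR and $rpc assumptions from above:

    # Start bdevperf idle (-z) on its own RPC socket; qd=1024, 4 KiB verify I/O, 10 s run.
    $SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Attach the remote subsystem; the resulting bdev shows up as NVMe0n1.
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Kick off the configured workload and wait for the result table.
    $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The result table that follows (roughly 18263 IOPS, 71.34 MiB/s, ~55.9 ms average latency) is consistent with Little's law: 18263 IOPS x ~0.0559 s of average latency is about 1021 outstanding I/Os, i.e. essentially the configured queue depth of 1024.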
00:14:59.072 00:14:59.072 Latency(us) 00:14:59.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.073 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:59.073 Verification LBA range: start 0x0 length 0x4000 00:14:59.073 NVMe0n1 : 10.04 18263.16 71.34 0.00 0.00 55932.94 22020.10 36071.01 00:14:59.073 =================================================================================================================== 00:14:59.073 Total : 18263.16 71.34 0.00 0.00 55932.94 22020.10 36071.01 00:14:59.073 0 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1618489 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1618489 ']' 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1618489 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618489 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618489' 00:14:59.073 killing process with pid 1618489 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1618489 00:14:59.073 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.073 00:14:59.073 Latency(us) 00:14:59.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.073 =================================================================================================================== 00:14:59.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.073 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1618489 00:14:59.331 18:06:59 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:59.331 18:06:59 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:59.331 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.331 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:59.331 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:59.332 rmmod nvme_rdma 00:14:59.332 rmmod nvme_fabrics 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1618209 ']' 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1618209 
00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1618209 ']' 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1618209 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1618209 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1618209' 00:14:59.332 killing process with pid 1618209 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1618209 00:14:59.332 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1618209 00:14:59.590 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.590 18:06:59 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:59.590 00:14:59.590 real 0m21.185s 00:14:59.590 user 0m26.417s 00:14:59.590 sys 0m7.113s 00:14:59.590 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.590 18:06:59 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:59.590 ************************************ 00:14:59.590 END TEST nvmf_queue_depth 00:14:59.590 ************************************ 00:14:59.590 18:06:59 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:59.590 18:06:59 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:59.590 18:06:59 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.590 18:06:59 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.590 18:06:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:59.850 ************************************ 00:14:59.850 START TEST nvmf_target_multipath 00:14:59.850 ************************************ 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:59.850 * Looking for test storage... 
00:14:59.850 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
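Sourcing nvmf/common.sh above fixes the defaults every target test builds on: ports 4420-4422, the 192.168.100 prefix with .8 as the first address, and a host NQN/ID pair produced by nvme gen-hostnqn and kept in the NVME_HOST array (for mlx5 NICs the connect command is later switched to 'nvme connect -i 15'). The sketch below only illustrates how those pieces would compose into a connect command; the derivation of NVME_HOSTID and the connect line itself are illustrative, and the multipath test in this log exits before ever connecting:

    # Defaults established by nvmf/common.sh (values as traced above).
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_PORT=4420
    NVMF_FIRST_TARGET_IP=$NVMF_IP_PREFIX.$NVMF_IP_LEAST_ADDR      # 192.168.100.8
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                           # the UUID portion of the host NQN (one way to derive it)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Illustration only: a connect against the queue_depth subsystem would expand to
    nvme connect -i 15 -t rdma -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"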
00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.850 18:07:00 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.832 18:07:08 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:09.832 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:09.832 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:09.832 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:09.832 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:09.832 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:09.833 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:09.833 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:09.833 altname enp217s0f0np0 00:15:09.833 altname ens818f0np0 00:15:09.833 inet 192.168.100.8/24 scope global mlx_0_0 00:15:09.833 valid_lft forever preferred_lft forever 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:09.833 18:07:08 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:09.833 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:09.833 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:09.833 altname enp217s0f1np1 00:15:09.833 altname ens818f1np1 00:15:09.833 inet 192.168.100.9/24 scope global mlx_0_1 00:15:09.833 valid_lft forever preferred_lft forever 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:09.833 192.168.100.9' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:09.833 192.168.100.9' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:09.833 192.168.100.9' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:15:09.833 run this test only with TCP transport for now 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:09.833 rmmod nvme_rdma 00:15:09.833 rmmod nvme_fabrics 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:09.833 
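The get_ip_address calls traced above resolve each RDMA-capable netdev to its IPv4 address with a one-line ip/awk/cut pipeline, and common.sh then splits the resulting list into first and second target IPs with head and tail. A stand-alone sketch of that pattern, with the rxe_cfg-based interface discovery replaced by the two device names seen in this run:

    # Field 4 of `ip -o -4 addr show DEV` is "ADDR/PREFIX"; strip the prefix length.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)

    # First line becomes the primary target IP, the second line (if any) the secondary.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

With both addresses in hand, multipath.sh immediately checks the transport, prints "run this test only with TCP transport for now", and tears down via nvmftestfini, which is why the RDMA run above unloads nvme-rdma and nvme-fabrics and exits 0 without running any multipath I/O.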
18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:09.833 00:15:09.833 real 0m8.705s 00:15:09.833 user 0m2.377s 00:15:09.833 sys 0m6.546s 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.833 18:07:08 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:09.833 ************************************ 00:15:09.833 END TEST nvmf_target_multipath 00:15:09.833 ************************************ 00:15:09.833 18:07:08 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:09.833 18:07:08 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:09.833 18:07:08 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:09.833 18:07:08 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.833 18:07:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:09.833 ************************************ 00:15:09.833 START TEST nvmf_zcopy 00:15:09.833 ************************************ 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:09.834 * Looking for test storage... 
00:15:09.834 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.834 18:07:08 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.953 18:07:16 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:17.953 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:17.953 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:17.953 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:17.954 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:17.954 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:17.954 18:07:17 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.954 18:07:17 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:17.954 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.954 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:17.954 altname enp217s0f0np0 00:15:17.954 altname ens818f0np0 00:15:17.954 inet 192.168.100.8/24 scope global mlx_0_0 00:15:17.954 valid_lft forever preferred_lft forever 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:17.954 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:17.954 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:17.954 altname enp217s0f1np1 00:15:17.954 altname ens818f1np1 00:15:17.954 inet 192.168.100.9/24 scope global mlx_0_1 00:15:17.954 valid_lft forever preferred_lft forever 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:17.954 192.168.100.9' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:17.954 192.168.100.9' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:17.954 192.168.100.9' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1628426 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1628426 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1628426 ']' 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:17.954 18:07:17 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.955 [2024-07-15 18:07:17.288296] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:15:17.955 [2024-07-15 18:07:17.288347] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.955 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.955 [2024-07-15 18:07:17.374545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.955 [2024-07-15 18:07:17.447003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.955 [2024-07-15 18:07:17.447048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.955 [2024-07-15 18:07:17.447057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.955 [2024-07-15 18:07:17.447066] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.955 [2024-07-15 18:07:17.447073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
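The allocate_nic_ips/get_ip_address trace above reduces to a little ip(8) text processing: list the IPv4 address of each RDMA-capable netdev, strip the prefix length, and keep the first two results as the target IPs. A minimal standalone sketch of that step, with illustrative variable names rather than the helper's actual internals:

for ifc in mlx_0_0 mlx_0_1; do
    # Column 4 of `ip -o -4 addr show` is "addr/prefix"; drop the prefix length.
    addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)
    echo "$ifc $addr"
done
# The harness keeps the first address (head -n 1) as NVMF_FIRST_TARGET_IP and the
# second (tail -n +2) as NVMF_SECOND_TARGET_IP, here 192.168.100.8 and 192.168.100.9.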
00:15:17.955 [2024-07-15 18:07:17.447094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:15:17.955 Unsupported transport: rdma 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # type=--id 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # id=0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:17.955 nvmf_trace.0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@821 -- # return 0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:17.955 rmmod nvme_rdma 00:15:17.955 rmmod nvme_fabrics 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1628426 ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1628426 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1628426 ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1628426 00:15:17.955 18:07:18 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1628426 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1628426' 00:15:17.955 killing process with pid 1628426 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1628426 00:15:17.955 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1628426 00:15:18.215 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.215 18:07:18 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:18.215 00:15:18.215 real 0m9.670s 00:15:18.215 user 0m3.740s 00:15:18.215 sys 0m6.699s 00:15:18.215 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:18.215 18:07:18 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:18.215 ************************************ 00:15:18.215 END TEST nvmf_zcopy 00:15:18.215 ************************************ 00:15:18.215 18:07:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:18.215 18:07:18 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:18.215 18:07:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:18.215 18:07:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.215 18:07:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:18.215 ************************************ 00:15:18.215 START TEST nvmf_nmic 00:15:18.215 ************************************ 00:15:18.215 18:07:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:18.474 * Looking for test storage... 
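The zcopy run traced above also shows the target application's whole lifecycle: nvmfappstart launches nvmf_tgt and waits for its RPC socket, the EXIT trap archives the shared-memory trace file, and nvmftestfini kills the process and unloads the host-side modules. Outside the harness the same flow would look roughly like the sketch below; the readiness probe via rpc_get_methods is an assumption (not necessarily what waitforlisten does internally) and $output_dir is a placeholder.

# Start the target with the options seen in the log.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the RPC socket until the target answers (assumed probe).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# ... run the test against the target ...
# Archive the trace ring from /dev/shm for offline analysis with spdk_trace.
tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
# Tear down: stop the target and unload the host modules, mirroring nvmfcleanup.
kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics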
00:15:18.474 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:18.474 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.475 
18:07:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:18.475 18:07:18 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.644 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:26.645 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:26.645 18:07:26 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:26.645 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:26.645 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:26.645 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:26.645 18:07:26 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:26.645 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:15:26.645 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:26.645 altname enp217s0f0np0 00:15:26.645 altname ens818f0np0 00:15:26.645 inet 192.168.100.8/24 scope global mlx_0_0 00:15:26.645 valid_lft forever preferred_lft forever 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:26.645 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:26.645 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:26.645 altname enp217s0f1np1 00:15:26.645 altname ens818f1np1 00:15:26.645 inet 192.168.100.9/24 scope global mlx_0_1 00:15:26.645 valid_lft forever preferred_lft forever 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.645 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:26.646 192.168.100.9' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:26.646 192.168.100.9' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:26.646 192.168.100.9' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1632696 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1632696 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1632696 ']' 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.646 18:07:26 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:26.646 [2024-07-15 18:07:26.873317] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:15:26.646 [2024-07-15 18:07:26.873370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.646 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.646 [2024-07-15 18:07:26.952977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.646 [2024-07-15 18:07:27.028979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.646 [2024-07-15 18:07:27.029023] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.646 [2024-07-15 18:07:27.029033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.646 [2024-07-15 18:07:27.029042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.646 [2024-07-15 18:07:27.029050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.646 [2024-07-15 18:07:27.029097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.646 [2024-07-15 18:07:27.029211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.646 [2024-07-15 18:07:27.029236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.646 [2024-07-15 18:07:27.029237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.591 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.591 [2024-07-15 18:07:27.761742] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1089f80/0x108e470) succeed. 00:15:27.591 [2024-07-15 18:07:27.771018] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x108b5c0/0x10cfb00) succeed. 
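From this point the nmic test drives the target entirely over JSON-RPC: it creates a 64 MiB malloc bdev, exposes it through subsystem cnode1 on an RDMA listener, then checks that a second subsystem cannot claim the same bdev (test case 1) and that a second listener gives the host an extra path (test case 2). rpc_cmd is the harness wrapper around the target's RPC socket; issued by hand with scripts/rpc.py (default socket /var/tmp/spdk.sock assumed), the same sequence would look roughly like:

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Test case 1: a bdev already claimed by cnode1 cannot be added to a second subsystem.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 already claimed
# Test case 2: a second listener on cnode1 gives the host two paths to the same namespace.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421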
00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 Malloc0 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 [2024-07-15 18:07:27.937703] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:27.592 test case1: single bdev can't be used in multiple subsystems 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 [2024-07-15 18:07:27.961461] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:27.592 [2024-07-15 
18:07:27.961482] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:27.592 [2024-07-15 18:07:27.961491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:27.592 request: 00:15:27.592 { 00:15:27.592 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:27.592 "namespace": { 00:15:27.592 "bdev_name": "Malloc0", 00:15:27.592 "no_auto_visible": false 00:15:27.592 }, 00:15:27.592 "method": "nvmf_subsystem_add_ns", 00:15:27.592 "req_id": 1 00:15:27.592 } 00:15:27.592 Got JSON-RPC error response 00:15:27.592 response: 00:15:27.592 { 00:15:27.592 "code": -32602, 00:15:27.592 "message": "Invalid parameters" 00:15:27.592 } 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:27.592 Adding namespace failed - expected result. 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:27.592 test case2: host connect to nvmf target in multiple paths 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:27.592 [2024-07-15 18:07:27.977530] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.592 18:07:27 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:28.972 18:07:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:15:29.540 18:07:29 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:29.540 18:07:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:29.540 18:07:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:29.540 18:07:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:29.540 18:07:29 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:32.075 18:07:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:32.075 18:07:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:32.075 18:07:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:32.075 18:07:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:32.075 18:07:31 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:32.075 18:07:31 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:32.075 18:07:31 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:32.075 [global] 00:15:32.075 thread=1 00:15:32.075 invalidate=1 00:15:32.075 rw=write 00:15:32.075 time_based=1 00:15:32.075 runtime=1 00:15:32.075 ioengine=libaio 00:15:32.075 direct=1 00:15:32.075 bs=4096 00:15:32.075 iodepth=1 00:15:32.075 norandommap=0 00:15:32.075 numjobs=1 00:15:32.075 00:15:32.075 verify_dump=1 00:15:32.075 verify_backlog=512 00:15:32.075 verify_state_save=0 00:15:32.075 do_verify=1 00:15:32.075 verify=crc32c-intel 00:15:32.075 [job0] 00:15:32.075 filename=/dev/nvme0n1 00:15:32.075 Could not set queue depth (nvme0n1) 00:15:32.075 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:32.075 fio-3.35 00:15:32.075 Starting 1 thread 00:15:33.470 00:15:33.470 job0: (groupid=0, jobs=1): err= 0: pid=1633845: Mon Jul 15 18:07:33 2024 00:15:33.470 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:15:33.470 slat (nsec): min=8173, max=30382, avg=8780.31, stdev=886.92 00:15:33.470 clat (nsec): min=40350, max=85079, avg=58736.81, stdev=3533.07 00:15:33.470 lat (nsec): min=58799, max=93621, avg=67517.12, stdev=3577.89 00:15:33.470 clat percentiles (nsec): 00:15:33.470 | 1.00th=[51968], 5.00th=[53504], 10.00th=[54528], 20.00th=[55552], 00:15:33.470 | 30.00th=[56576], 40.00th=[57600], 50.00th=[58624], 60.00th=[59648], 00:15:33.470 | 70.00th=[60672], 80.00th=[61696], 90.00th=[63232], 95.00th=[64768], 00:15:33.470 | 99.00th=[68096], 99.50th=[70144], 99.90th=[76288], 99.95th=[80384], 00:15:33.470 | 99.99th=[85504] 00:15:33.470 write: IOPS=7277, BW=28.4MiB/s (29.8MB/s)(28.5MiB/1001msec); 0 zone resets 00:15:33.470 slat (nsec): min=10022, max=83215, avg=10744.82, stdev=1352.36 00:15:33.470 clat (nsec): min=39793, max=78187, avg=56462.91, stdev=3531.12 00:15:33.470 lat (usec): min=58, max=158, avg=67.21, stdev= 3.76 00:15:33.470 clat percentiles (nsec): 00:15:33.470 | 1.00th=[49408], 5.00th=[50944], 10.00th=[51968], 20.00th=[53504], 00:15:33.470 | 30.00th=[54528], 40.00th=[55552], 50.00th=[56064], 60.00th=[57088], 00:15:33.470 | 70.00th=[58112], 80.00th=[59648], 90.00th=[61184], 95.00th=[62720], 00:15:33.470 | 99.00th=[65280], 99.50th=[66048], 99.90th=[70144], 99.95th=[73216], 00:15:33.470 | 99.99th=[78336] 00:15:33.470 bw ( KiB/s): min=29224, max=29224, per=100.00%, avg=29224.00, stdev= 0.00, samples=1 00:15:33.470 iops : min= 7306, max= 7306, avg=7306.00, stdev= 0.00, samples=1 00:15:33.470 lat (usec) : 50=0.98%, 100=99.02% 00:15:33.470 cpu : usr=10.30%, sys=18.50%, ctx=14453, majf=0, minf=2 00:15:33.470 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:33.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.470 issued rwts: total=7168,7285,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.470 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:33.470 00:15:33.470 Run status group 0 (all jobs): 00:15:33.471 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:15:33.471 WRITE: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=28.5MiB (29.8MB), run=1001-1001msec 00:15:33.471 00:15:33.471 Disk stats (read/write): 00:15:33.471 nvme0n1: ios=6373/6656, 
merge=0/0, ticks=326/316, in_queue=642, util=90.57% 00:15:33.471 18:07:33 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:35.376 rmmod nvme_rdma 00:15:35.376 rmmod nvme_fabrics 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1632696 ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1632696 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1632696 ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1632696 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1632696 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1632696' 00:15:35.376 killing process with pid 1632696 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1632696 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1632696 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:35.376 00:15:35.376 real 0m17.219s 00:15:35.376 user 0m44.884s 00:15:35.376 sys 0m7.301s 
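The nmic run above covers two behaviours: the same malloc bdev cannot be added as a namespace to a second subsystem (the bdev is already claimed, so the second add fails with the -32602 "Invalid parameters" JSON-RPC response shown), and the host can still reach cnode1 through the extra listener on port 4421. A minimal standalone sketch of the first check, assuming an already running nvmf_tgt on the default RPC socket and reusing only the rpc.py commands that appear in the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# Malloc0 is already claimed by cnode1, so this add is expected to fail.
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: namespace add succeeded" >&2
    exit 1
fi
echo ' Adding namespace failed - expected result.'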
00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:35.376 18:07:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:35.376 ************************************ 00:15:35.376 END TEST nvmf_nmic 00:15:35.376 ************************************ 00:15:35.637 18:07:35 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:35.637 18:07:35 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:15:35.637 18:07:35 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:35.637 18:07:35 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.637 18:07:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:35.637 ************************************ 00:15:35.637 START TEST nvmf_fio_target 00:15:35.637 ************************************ 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:15:35.637 * Looking for test storage... 00:15:35.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:35.637 18:07:35 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:43.763 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:43.763 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:43.763 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:43.763 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.763 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:43.764 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:43.764 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:43.764 altname enp217s0f0np0 00:15:43.764 altname ens818f0np0 00:15:43.764 inet 192.168.100.8/24 scope global mlx_0_0 00:15:43.764 valid_lft forever preferred_lft forever 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:43.764 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:43.764 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:43.764 altname enp217s0f1np1 
00:15:43.764 altname ens818f1np1 00:15:43.764 inet 192.168.100.9/24 scope global mlx_0_1 00:15:43.764 valid_lft forever preferred_lft forever 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:43.764 192.168.100.9' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:43.764 192.168.100.9' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:43.764 192.168.100.9' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1638295 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1638295 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1638295 ']' 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.764 18:07:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.764 [2024-07-15 18:07:43.959745] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
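The interface discovery traced above reduces to reading the first IPv4 address off each RDMA netdev; a condensed sketch of that pipeline, with the mlx_0_0/mlx_0_1 names and the resulting addresses taken from this run rather than detected:

# Condensed form of the get_ip_address helper traced above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run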
00:15:43.764 [2024-07-15 18:07:43.959795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.764 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.764 [2024-07-15 18:07:44.040050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.764 [2024-07-15 18:07:44.113855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.764 [2024-07-15 18:07:44.113894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.764 [2024-07-15 18:07:44.113903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.764 [2024-07-15 18:07:44.113912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.764 [2024-07-15 18:07:44.113919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.764 [2024-07-15 18:07:44.113966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.764 [2024-07-15 18:07:44.114066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.764 [2024-07-15 18:07:44.114081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.764 [2024-07-15 18:07:44.114083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.705 18:07:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:44.705 [2024-07-15 18:07:44.985572] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21e0f80/0x21e5470) succeed. 00:15:44.705 [2024-07-15 18:07:44.994877] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21e25c0/0x2226b00) succeed. 
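With the target up, fio.sh first creates the RDMA transport (the two "Create IB device ... succeed" notices above), then builds the bdevs and the subsystem used for the multi-namespace fio run that follows. A condensed sketch of that setup, assuming a running nvmf_tgt and reusing only rpc.py calls visible in the trace; the default MallocN names returned by bdev_malloc_create are assumed:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# fio.sh@19: RDMA transport with 1024 shared buffers and 8192-byte in-capsule data
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# fio.sh@21-@32: seven 64 MiB / 512 B malloc bdevs, two exported directly and
# the rest combined into a RAID0 bdev and a concat bdev
for _ in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# fio.sh@34-@46: one subsystem exposing all four namespaces on port 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420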
00:15:44.965 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:44.965 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:44.965 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:45.225 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:45.225 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:45.484 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:45.484 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:45.744 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:45.744 18:07:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:45.744 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:46.013 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:46.013 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:46.271 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:46.271 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:46.530 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:46.530 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:46.530 18:07:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:46.789 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:46.789 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:47.087 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:47.088 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:47.088 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:47.371 [2024-07-15 18:07:47.586161] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:47.371 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:15:47.630 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:47.630 18:07:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:48.563 18:07:48 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:48.563 18:07:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.563 18:07:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.563 18:07:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:48.563 18:07:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:48.563 18:07:48 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:51.098 18:07:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:51.098 [global] 00:15:51.098 thread=1 00:15:51.098 invalidate=1 00:15:51.098 rw=write 00:15:51.098 time_based=1 00:15:51.098 runtime=1 00:15:51.098 ioengine=libaio 00:15:51.098 direct=1 00:15:51.098 bs=4096 00:15:51.098 iodepth=1 00:15:51.098 norandommap=0 00:15:51.098 numjobs=1 00:15:51.098 00:15:51.098 verify_dump=1 00:15:51.098 verify_backlog=512 00:15:51.098 verify_state_save=0 00:15:51.098 do_verify=1 00:15:51.098 verify=crc32c-intel 00:15:51.098 [job0] 00:15:51.098 filename=/dev/nvme0n1 00:15:51.098 [job1] 00:15:51.098 filename=/dev/nvme0n2 00:15:51.098 [job2] 00:15:51.098 filename=/dev/nvme0n3 00:15:51.098 [job3] 00:15:51.098 filename=/dev/nvme0n4 00:15:51.098 Could not set queue depth (nvme0n1) 00:15:51.098 Could not set queue depth (nvme0n2) 00:15:51.098 Could not set queue depth (nvme0n3) 00:15:51.098 Could not set queue depth (nvme0n4) 00:15:51.098 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.098 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.098 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.098 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:51.098 fio-3.35 00:15:51.098 Starting 4 threads 00:15:52.477 00:15:52.477 job0: (groupid=0, jobs=1): err= 0: pid=1639838: Mon Jul 15 18:07:52 2024 00:15:52.477 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:15:52.477 slat 
(nsec): min=8268, max=32696, avg=10624.76, stdev=3181.80 00:15:52.477 clat (usec): min=65, max=352, avg=125.00, stdev=24.04 00:15:52.477 lat (usec): min=77, max=361, avg=135.62, stdev=23.61 00:15:52.477 clat percentiles (usec): 00:15:52.477 | 1.00th=[ 79], 5.00th=[ 87], 10.00th=[ 97], 20.00th=[ 104], 00:15:52.477 | 30.00th=[ 110], 40.00th=[ 116], 50.00th=[ 126], 60.00th=[ 135], 00:15:52.477 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:15:52.477 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 200], 99.95th=[ 241], 00:15:52.477 | 99.99th=[ 355] 00:15:52.477 write: IOPS=3826, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1001msec); 0 zone resets 00:15:52.477 slat (nsec): min=8759, max=45414, avg=13221.39, stdev=3801.05 00:15:52.477 clat (usec): min=64, max=194, avg=116.28, stdev=22.26 00:15:52.477 lat (usec): min=77, max=207, avg=129.50, stdev=22.28 00:15:52.477 clat percentiles (usec): 00:15:52.477 | 1.00th=[ 75], 5.00th=[ 84], 10.00th=[ 92], 20.00th=[ 98], 00:15:52.477 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 113], 60.00th=[ 121], 00:15:52.477 | 70.00th=[ 129], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 157], 00:15:52.477 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 190], 00:15:52.477 | 99.99th=[ 196] 00:15:52.477 bw ( KiB/s): min=15720, max=15720, per=23.94%, avg=15720.00, stdev= 0.00, samples=1 00:15:52.477 iops : min= 3930, max= 3930, avg=3930.00, stdev= 0.00, samples=1 00:15:52.477 lat (usec) : 100=19.85%, 250=80.13%, 500=0.01% 00:15:52.477 cpu : usr=5.60%, sys=10.30%, ctx=7414, majf=0, minf=2 00:15:52.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.477 issued rwts: total=3584,3830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:52.477 job1: (groupid=0, jobs=1): err= 0: pid=1639839: Mon Jul 15 18:07:52 2024 00:15:52.477 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:15:52.477 slat (nsec): min=8268, max=30449, avg=9461.81, stdev=2119.74 00:15:52.477 clat (usec): min=65, max=289, avg=126.20, stdev=22.85 00:15:52.477 lat (usec): min=74, max=298, avg=135.67, stdev=23.09 00:15:52.477 clat percentiles (usec): 00:15:52.477 | 1.00th=[ 77], 5.00th=[ 90], 10.00th=[ 101], 20.00th=[ 108], 00:15:52.477 | 30.00th=[ 112], 40.00th=[ 117], 50.00th=[ 126], 60.00th=[ 135], 00:15:52.477 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 163], 00:15:52.477 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 204], 99.95th=[ 225], 00:15:52.477 | 99.99th=[ 289] 00:15:52.477 write: IOPS=3863, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec); 0 zone resets 00:15:52.477 slat (nsec): min=10028, max=39702, avg=11355.66, stdev=2076.23 00:15:52.477 clat (usec): min=63, max=187, avg=117.47, stdev=20.88 00:15:52.477 lat (usec): min=74, max=215, avg=128.82, stdev=21.09 00:15:52.477 clat percentiles (usec): 00:15:52.477 | 1.00th=[ 75], 5.00th=[ 87], 10.00th=[ 95], 20.00th=[ 100], 00:15:52.477 | 30.00th=[ 105], 40.00th=[ 110], 50.00th=[ 115], 60.00th=[ 122], 00:15:52.477 | 70.00th=[ 130], 80.00th=[ 137], 90.00th=[ 145], 95.00th=[ 155], 00:15:52.477 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 188], 00:15:52.477 | 99.99th=[ 188] 00:15:52.477 bw ( KiB/s): min=15816, max=15816, per=24.09%, avg=15816.00, stdev= 0.00, samples=1 00:15:52.477 iops : min= 3954, max= 3954, avg=3954.00, stdev= 0.00, samples=1 
00:15:52.477 lat (usec) : 100=14.70%, 250=85.29%, 500=0.01% 00:15:52.477 cpu : usr=4.40%, sys=11.20%, ctx=7451, majf=0, minf=1 00:15:52.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.477 issued rwts: total=3584,3867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:52.477 job2: (groupid=0, jobs=1): err= 0: pid=1639840: Mon Jul 15 18:07:52 2024 00:15:52.477 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:15:52.477 slat (nsec): min=8334, max=34966, avg=10022.08, stdev=3096.78 00:15:52.477 clat (usec): min=74, max=207, avg=129.04, stdev=22.97 00:15:52.478 lat (usec): min=83, max=216, avg=139.06, stdev=23.97 00:15:52.478 clat percentiles (usec): 00:15:52.478 | 1.00th=[ 82], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 114], 00:15:52.478 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 131], 60.00th=[ 137], 00:15:52.478 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 165], 00:15:52.478 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 198], 99.95th=[ 204], 00:15:52.478 | 99.99th=[ 208] 00:15:52.478 write: IOPS=3705, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1001msec); 0 zone resets 00:15:52.478 slat (nsec): min=10280, max=61000, avg=11894.10, stdev=2993.33 00:15:52.478 clat (usec): min=69, max=232, avg=118.97, stdev=20.57 00:15:52.478 lat (usec): min=81, max=255, avg=130.86, stdev=21.74 00:15:52.478 clat percentiles (usec): 00:15:52.478 | 1.00th=[ 78], 5.00th=[ 84], 10.00th=[ 88], 20.00th=[ 101], 00:15:52.478 | 30.00th=[ 112], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 125], 00:15:52.478 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 145], 95.00th=[ 153], 00:15:52.478 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 190], 00:15:52.478 | 99.99th=[ 233] 00:15:52.478 bw ( KiB/s): min=15648, max=15648, per=23.83%, avg=15648.00, stdev= 0.00, samples=1 00:15:52.478 iops : min= 3912, max= 3912, avg=3912.00, stdev= 0.00, samples=1 00:15:52.478 lat (usec) : 100=17.54%, 250=82.46% 00:15:52.478 cpu : usr=4.70%, sys=10.50%, ctx=7293, majf=0, minf=1 00:15:52.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.478 issued rwts: total=3584,3709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:52.478 job3: (groupid=0, jobs=1): err= 0: pid=1639841: Mon Jul 15 18:07:52 2024 00:15:52.478 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:15:52.478 slat (nsec): min=8389, max=29957, avg=9035.29, stdev=918.19 00:15:52.478 clat (usec): min=72, max=163, avg=91.95, stdev=15.12 00:15:52.478 lat (usec): min=81, max=172, avg=100.99, stdev=15.16 00:15:52.478 clat percentiles (usec): 00:15:52.478 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 82], 00:15:52.478 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:15:52.478 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 120], 95.00th=[ 124], 00:15:52.478 | 99.00th=[ 135], 99.50th=[ 147], 99.90th=[ 161], 99.95th=[ 163], 00:15:52.478 | 99.99th=[ 163] 00:15:52.478 write: IOPS=5020, BW=19.6MiB/s (20.6MB/s)(19.6MiB/1001msec); 0 zone resets 00:15:52.478 slat (nsec): min=10345, max=39380, avg=11000.15, stdev=1077.05 
00:15:52.478 clat (usec): min=69, max=172, avg=91.89, stdev=17.56 00:15:52.478 lat (usec): min=80, max=183, avg=102.89, stdev=17.66 00:15:52.478 clat percentiles (usec): 00:15:52.478 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:15:52.478 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 87], 00:15:52.478 | 70.00th=[ 93], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 125], 00:15:52.478 | 99.00th=[ 133], 99.50th=[ 137], 99.90th=[ 165], 99.95th=[ 169], 00:15:52.478 | 99.99th=[ 174] 00:15:52.478 bw ( KiB/s): min=20480, max=20480, per=31.19%, avg=20480.00, stdev= 0.00, samples=1 00:15:52.478 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:52.478 lat (usec) : 100=75.81%, 250=24.19% 00:15:52.478 cpu : usr=7.50%, sys=12.10%, ctx=9634, majf=0, minf=1 00:15:52.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.478 issued rwts: total=4608,5026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:52.478 00:15:52.478 Run status group 0 (all jobs): 00:15:52.478 READ: bw=59.9MiB/s (62.9MB/s), 14.0MiB/s-18.0MiB/s (14.7MB/s-18.9MB/s), io=60.0MiB (62.9MB), run=1001-1001msec 00:15:52.478 WRITE: bw=64.1MiB/s (67.2MB/s), 14.5MiB/s-19.6MiB/s (15.2MB/s-20.6MB/s), io=64.2MiB (67.3MB), run=1001-1001msec 00:15:52.478 00:15:52.478 Disk stats (read/write): 00:15:52.478 nvme0n1: ios=3121/3076, merge=0/0, ticks=385/319, in_queue=704, util=84.57% 00:15:52.478 nvme0n2: ios=3072/3104, merge=0/0, ticks=363/343, in_queue=706, util=85.50% 00:15:52.478 nvme0n3: ios=2817/3072, merge=0/0, ticks=359/347, in_queue=706, util=88.48% 00:15:52.478 nvme0n4: ios=3852/4096, merge=0/0, ticks=334/344, in_queue=678, util=89.52% 00:15:52.478 18:07:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:52.478 [global] 00:15:52.478 thread=1 00:15:52.478 invalidate=1 00:15:52.478 rw=randwrite 00:15:52.478 time_based=1 00:15:52.478 runtime=1 00:15:52.478 ioengine=libaio 00:15:52.478 direct=1 00:15:52.478 bs=4096 00:15:52.478 iodepth=1 00:15:52.478 norandommap=0 00:15:52.478 numjobs=1 00:15:52.478 00:15:52.478 verify_dump=1 00:15:52.478 verify_backlog=512 00:15:52.478 verify_state_save=0 00:15:52.478 do_verify=1 00:15:52.478 verify=crc32c-intel 00:15:52.478 [job0] 00:15:52.478 filename=/dev/nvme0n1 00:15:52.478 [job1] 00:15:52.478 filename=/dev/nvme0n2 00:15:52.478 [job2] 00:15:52.478 filename=/dev/nvme0n3 00:15:52.478 [job3] 00:15:52.478 filename=/dev/nvme0n4 00:15:52.478 Could not set queue depth (nvme0n1) 00:15:52.478 Could not set queue depth (nvme0n2) 00:15:52.478 Could not set queue depth (nvme0n3) 00:15:52.478 Could not set queue depth (nvme0n4) 00:15:52.737 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:52.737 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:52.737 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:52.737 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:52.737 fio-3.35 00:15:52.737 Starting 4 threads 00:15:54.112 00:15:54.112 job0: (groupid=0, 
jobs=1): err= 0: pid=1640264: Mon Jul 15 18:07:54 2024 00:15:54.112 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:15:54.112 slat (nsec): min=8211, max=30960, avg=8834.92, stdev=772.44 00:15:54.112 clat (usec): min=66, max=172, avg=86.08, stdev=13.28 00:15:54.112 lat (usec): min=75, max=180, avg=94.91, stdev=13.28 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:15:54.112 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:15:54.112 | 70.00th=[ 87], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 121], 00:15:54.112 | 99.00th=[ 131], 99.50th=[ 139], 99.90th=[ 155], 99.95th=[ 163], 00:15:54.112 | 99.99th=[ 172] 00:15:54.112 write: IOPS=5297, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1001msec); 0 zone resets 00:15:54.112 slat (nsec): min=7801, max=68230, avg=10391.11, stdev=1295.14 00:15:54.112 clat (usec): min=63, max=250, avg=82.91, stdev=12.29 00:15:54.112 lat (usec): min=74, max=260, avg=93.31, stdev=12.41 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:15:54.112 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 82], 00:15:54.112 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 113], 00:15:54.112 | 99.00th=[ 133], 99.50th=[ 141], 99.90th=[ 151], 99.95th=[ 155], 00:15:54.112 | 99.99th=[ 251] 00:15:54.112 bw ( KiB/s): min=20480, max=20480, per=32.20%, avg=20480.00, stdev= 0.00, samples=1 00:15:54.112 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:54.112 lat (usec) : 100=90.42%, 250=9.57%, 500=0.01% 00:15:54.112 cpu : usr=6.30%, sys=14.50%, ctx=10424, majf=0, minf=1 00:15:54.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 issued rwts: total=5120,5303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.112 job1: (groupid=0, jobs=1): err= 0: pid=1640265: Mon Jul 15 18:07:54 2024 00:15:54.112 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:15:54.112 slat (nsec): min=8240, max=40238, avg=10619.23, stdev=2756.10 00:15:54.112 clat (usec): min=69, max=221, avg=143.85, stdev=17.66 00:15:54.112 lat (usec): min=77, max=230, avg=154.47, stdev=18.23 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 94], 5.00th=[ 118], 10.00th=[ 122], 20.00th=[ 133], 00:15:54.112 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:15:54.112 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 174], 00:15:54.112 | 99.00th=[ 200], 99.50th=[ 202], 99.90th=[ 212], 99.95th=[ 217], 00:15:54.112 | 99.99th=[ 221] 00:15:54.112 write: IOPS=3472, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1001msec); 0 zone resets 00:15:54.112 slat (nsec): min=9848, max=41533, avg=12477.26, stdev=3059.27 00:15:54.112 clat (usec): min=66, max=213, avg=134.40, stdev=17.17 00:15:54.112 lat (usec): min=77, max=224, avg=146.87, stdev=17.66 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 80], 5.00th=[ 106], 10.00th=[ 113], 20.00th=[ 124], 00:15:54.112 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:15:54.112 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 159], 00:15:54.112 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 200], 00:15:54.112 | 99.99th=[ 215] 00:15:54.112 bw ( KiB/s): min=14776, max=14776, 
per=23.23%, avg=14776.00, stdev= 0.00, samples=1 00:15:54.112 iops : min= 3694, max= 3694, avg=3694.00, stdev= 0.00, samples=1 00:15:54.112 lat (usec) : 100=1.74%, 250=98.26% 00:15:54.112 cpu : usr=3.80%, sys=10.40%, ctx=6548, majf=0, minf=1 00:15:54.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 issued rwts: total=3072,3476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.112 job2: (groupid=0, jobs=1): err= 0: pid=1640266: Mon Jul 15 18:07:54 2024 00:15:54.112 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:15:54.112 slat (nsec): min=8505, max=21003, avg=9107.57, stdev=920.52 00:15:54.112 clat (usec): min=73, max=202, avg=142.25, stdev=15.97 00:15:54.112 lat (usec): min=82, max=211, avg=151.36, stdev=15.95 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 87], 5.00th=[ 119], 10.00th=[ 124], 20.00th=[ 133], 00:15:54.112 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:15:54.112 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:15:54.112 | 99.00th=[ 180], 99.50th=[ 182], 99.90th=[ 196], 99.95th=[ 198], 00:15:54.112 | 99.99th=[ 202] 00:15:54.112 write: IOPS=3563, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec); 0 zone resets 00:15:54.112 slat (nsec): min=7906, max=36995, avg=11007.50, stdev=1150.44 00:15:54.112 clat (usec): min=72, max=193, avg=134.74, stdev=14.89 00:15:54.112 lat (usec): min=83, max=204, avg=145.75, stdev=14.97 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 82], 5.00th=[ 111], 10.00th=[ 117], 20.00th=[ 127], 00:15:54.112 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:15:54.112 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:15:54.112 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 194], 00:15:54.112 | 99.99th=[ 194] 00:15:54.112 bw ( KiB/s): min=14728, max=14728, per=23.16%, avg=14728.00, stdev= 0.00, samples=1 00:15:54.112 iops : min= 3682, max= 3682, avg=3682.00, stdev= 0.00, samples=1 00:15:54.112 lat (usec) : 100=3.01%, 250=96.99% 00:15:54.112 cpu : usr=3.90%, sys=10.20%, ctx=6639, majf=0, minf=1 00:15:54.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 issued rwts: total=3072,3567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.112 job3: (groupid=0, jobs=1): err= 0: pid=1640267: Mon Jul 15 18:07:54 2024 00:15:54.112 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:15:54.112 slat (nsec): min=8284, max=20265, avg=8959.25, stdev=755.46 00:15:54.112 clat (usec): min=74, max=205, avg=142.33, stdev=16.03 00:15:54.112 lat (usec): min=83, max=214, avg=151.28, stdev=16.01 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 86], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 133], 00:15:54.112 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:15:54.112 | 70.00th=[ 151], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 163], 00:15:54.112 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 196], 00:15:54.112 | 99.99th=[ 206] 00:15:54.112 write: IOPS=3565, BW=13.9MiB/s 
(14.6MB/s)(13.9MiB/1001msec); 0 zone resets 00:15:54.112 slat (nsec): min=10108, max=38262, avg=11142.18, stdev=1270.62 00:15:54.112 clat (usec): min=70, max=198, avg=134.55, stdev=14.73 00:15:54.112 lat (usec): min=82, max=211, avg=145.69, stdev=14.75 00:15:54.112 clat percentiles (usec): 00:15:54.112 | 1.00th=[ 82], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 127], 00:15:54.112 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:15:54.112 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 149], 95.00th=[ 155], 00:15:54.112 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 188], 99.95th=[ 190], 00:15:54.112 | 99.99th=[ 198] 00:15:54.112 bw ( KiB/s): min=14744, max=14744, per=23.18%, avg=14744.00, stdev= 0.00, samples=1 00:15:54.112 iops : min= 3686, max= 3686, avg=3686.00, stdev= 0.00, samples=1 00:15:54.112 lat (usec) : 100=2.94%, 250=97.06% 00:15:54.112 cpu : usr=5.20%, sys=8.90%, ctx=6641, majf=0, minf=2 00:15:54.112 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.112 issued rwts: total=3072,3569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.112 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.112 00:15:54.112 Run status group 0 (all jobs): 00:15:54.112 READ: bw=55.9MiB/s (58.7MB/s), 12.0MiB/s-20.0MiB/s (12.6MB/s-20.9MB/s), io=56.0MiB (58.7MB), run=1001-1001msec 00:15:54.112 WRITE: bw=62.1MiB/s (65.1MB/s), 13.6MiB/s-20.7MiB/s (14.2MB/s-21.7MB/s), io=62.2MiB (65.2MB), run=1001-1001msec 00:15:54.112 00:15:54.112 Disk stats (read/write): 00:15:54.113 nvme0n1: ios=4145/4565, merge=0/0, ticks=330/345, in_queue=675, util=84.75% 00:15:54.113 nvme0n2: ios=2560/2957, merge=0/0, ticks=335/370, in_queue=705, util=85.51% 00:15:54.113 nvme0n3: ios=2560/3013, merge=0/0, ticks=337/379, in_queue=716, util=88.49% 00:15:54.113 nvme0n4: ios=2560/3015, merge=0/0, ticks=335/374, in_queue=709, util=89.53% 00:15:54.113 18:07:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:54.113 [global] 00:15:54.113 thread=1 00:15:54.113 invalidate=1 00:15:54.113 rw=write 00:15:54.113 time_based=1 00:15:54.113 runtime=1 00:15:54.113 ioengine=libaio 00:15:54.113 direct=1 00:15:54.113 bs=4096 00:15:54.113 iodepth=128 00:15:54.113 norandommap=0 00:15:54.113 numjobs=1 00:15:54.113 00:15:54.113 verify_dump=1 00:15:54.113 verify_backlog=512 00:15:54.113 verify_state_save=0 00:15:54.113 do_verify=1 00:15:54.113 verify=crc32c-intel 00:15:54.113 [job0] 00:15:54.113 filename=/dev/nvme0n1 00:15:54.113 [job1] 00:15:54.113 filename=/dev/nvme0n2 00:15:54.113 [job2] 00:15:54.113 filename=/dev/nvme0n3 00:15:54.113 [job3] 00:15:54.113 filename=/dev/nvme0n4 00:15:54.113 Could not set queue depth (nvme0n1) 00:15:54.113 Could not set queue depth (nvme0n2) 00:15:54.113 Could not set queue depth (nvme0n3) 00:15:54.113 Could not set queue depth (nvme0n4) 00:15:54.371 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:54.371 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:54.371 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:54.371 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:15:54.371 fio-3.35 00:15:54.371 Starting 4 threads 00:15:55.747 00:15:55.748 job0: (groupid=0, jobs=1): err= 0: pid=1640689: Mon Jul 15 18:07:55 2024 00:15:55.748 read: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(41.8MiB/1002msec) 00:15:55.748 slat (usec): min=2, max=1132, avg=46.29, stdev=172.29 00:15:55.748 clat (usec): min=1299, max=7941, avg=6076.29, stdev=699.26 00:15:55.748 lat (usec): min=1895, max=7943, avg=6122.58, stdev=684.28 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[ 4686], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5407], 00:15:55.748 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 6325], 60.00th=[ 6521], 00:15:55.748 | 70.00th=[ 6652], 80.00th=[ 6718], 90.00th=[ 6849], 95.00th=[ 6915], 00:15:55.748 | 99.00th=[ 7046], 99.50th=[ 7046], 99.90th=[ 7177], 99.95th=[ 7898], 00:15:55.748 | 99.99th=[ 7963] 00:15:55.748 write: IOPS=10.7k, BW=41.9MiB/s (44.0MB/s)(42.0MiB/1002msec); 0 zone resets 00:15:55.748 slat (usec): min=2, max=994, avg=44.02, stdev=162.08 00:15:55.748 clat (usec): min=4101, max=6846, avg=5766.66, stdev=612.35 00:15:55.748 lat (usec): min=4200, max=6851, avg=5810.67, stdev=596.57 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[ 4555], 5.00th=[ 4948], 10.00th=[ 5014], 20.00th=[ 5145], 00:15:55.748 | 30.00th=[ 5211], 40.00th=[ 5407], 50.00th=[ 5932], 60.00th=[ 6194], 00:15:55.748 | 70.00th=[ 6325], 80.00th=[ 6390], 90.00th=[ 6456], 95.00th=[ 6587], 00:15:55.748 | 99.00th=[ 6652], 99.50th=[ 6718], 99.90th=[ 6849], 99.95th=[ 6849], 00:15:55.748 | 99.99th=[ 6849] 00:15:55.748 bw ( KiB/s): min=39344, max=46672, per=41.94%, avg=43008.00, stdev=5181.68, samples=2 00:15:55.748 iops : min= 9836, max=11668, avg=10752.00, stdev=1295.42, samples=2 00:15:55.748 lat (msec) : 2=0.08%, 4=0.18%, 10=99.74% 00:15:55.748 cpu : usr=3.40%, sys=7.99%, ctx=1363, majf=0, minf=1 00:15:55.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:55.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.748 issued rwts: total=10688,10752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.748 job1: (groupid=0, jobs=1): err= 0: pid=1640690: Mon Jul 15 18:07:55 2024 00:15:55.748 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:15:55.748 slat (usec): min=2, max=1165, avg=122.56, stdev=287.86 00:15:55.748 clat (usec): min=7794, max=20250, avg=15635.57, stdev=1925.05 00:15:55.748 lat (usec): min=7796, max=20253, avg=15758.13, stdev=1918.68 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[13042], 5.00th=[13698], 10.00th=[13960], 20.00th=[14222], 00:15:55.748 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:15:55.748 | 70.00th=[16909], 80.00th=[18220], 90.00th=[18744], 95.00th=[18744], 00:15:55.748 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:15:55.748 | 99.99th=[20317] 00:15:55.748 write: IOPS=4140, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec); 0 zone resets 00:15:55.748 slat (usec): min=2, max=1360, avg=116.84, stdev=273.99 00:15:55.748 clat (usec): min=1537, max=18770, avg=15055.96, stdev=2124.10 00:15:55.748 lat (usec): min=2397, max=18791, avg=15172.79, stdev=2118.33 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[ 6783], 5.00th=[13042], 10.00th=[13566], 20.00th=[13829], 00:15:55.748 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14484], 00:15:55.748 | 
70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:15:55.748 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18482], 99.95th=[18744], 00:15:55.748 | 99.99th=[18744] 00:15:55.748 bw ( KiB/s): min=16384, max=16384, per=15.98%, avg=16384.00, stdev= 0.00, samples=2 00:15:55.748 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:15:55.748 lat (msec) : 2=0.01%, 4=0.21%, 10=0.67%, 20=99.10%, 50=0.01% 00:15:55.748 cpu : usr=2.20%, sys=2.99%, ctx=1983, majf=0, minf=1 00:15:55.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:55.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.748 issued rwts: total=4096,4153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.748 job2: (groupid=0, jobs=1): err= 0: pid=1640691: Mon Jul 15 18:07:55 2024 00:15:55.748 read: IOPS=6212, BW=24.3MiB/s (25.4MB/s)(24.3MiB/1002msec) 00:15:55.748 slat (usec): min=2, max=1403, avg=77.56, stdev=239.65 00:15:55.748 clat (usec): min=599, max=19294, avg=9816.38, stdev=4132.40 00:15:55.748 lat (usec): min=1414, max=19519, avg=9893.93, stdev=4166.12 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[ 4752], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7635], 00:15:55.748 | 30.00th=[ 7767], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8029], 00:15:55.748 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[18482], 95.00th=[18744], 00:15:55.748 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 00:15:55.748 | 99.99th=[19268] 00:15:55.748 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:15:55.748 slat (usec): min=2, max=1382, avg=74.97, stdev=226.16 00:15:55.748 clat (usec): min=6709, max=18792, avg=9811.44, stdev=4175.28 00:15:55.748 lat (usec): min=6712, max=18796, avg=9886.41, stdev=4208.01 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[ 6980], 5.00th=[ 7046], 10.00th=[ 7111], 20.00th=[ 7242], 00:15:55.748 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:15:55.748 | 70.00th=[ 8094], 80.00th=[16909], 90.00th=[17433], 95.00th=[17695], 00:15:55.748 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:15:55.748 | 99.99th=[18744] 00:15:55.748 bw ( KiB/s): min=19960, max=32920, per=25.79%, avg=26440.00, stdev=9164.10, samples=2 00:15:55.748 iops : min= 4990, max= 8230, avg=6610.00, stdev=2291.03, samples=2 00:15:55.748 lat (usec) : 750=0.01% 00:15:55.748 lat (msec) : 2=0.12%, 4=0.25%, 10=78.20%, 20=21.43% 00:15:55.748 cpu : usr=2.90%, sys=4.30%, ctx=1506, majf=0, minf=1 00:15:55.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:55.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.748 issued rwts: total=6225,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.748 job3: (groupid=0, jobs=1): err= 0: pid=1640692: Mon Jul 15 18:07:55 2024 00:15:55.748 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:15:55.748 slat (usec): min=2, max=1162, avg=122.45, stdev=286.87 00:15:55.748 clat (usec): min=8736, max=19481, avg=15630.07, stdev=1912.79 00:15:55.748 lat (usec): min=8738, max=19677, avg=15752.51, stdev=1907.71 00:15:55.748 clat percentiles (usec): 00:15:55.748 
| 1.00th=[12518], 5.00th=[13698], 10.00th=[14091], 20.00th=[14222], 00:15:55.748 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:15:55.748 | 70.00th=[16712], 80.00th=[18220], 90.00th=[18744], 95.00th=[18744], 00:15:55.748 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:15:55.748 | 99.99th=[19530] 00:15:55.748 write: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1003msec); 0 zone resets 00:15:55.748 slat (usec): min=2, max=1359, avg=117.07, stdev=274.15 00:15:55.748 clat (usec): min=1535, max=18769, avg=15068.67, stdev=2103.77 00:15:55.748 lat (usec): min=2332, max=19083, avg=15185.73, stdev=2096.32 00:15:55.748 clat percentiles (usec): 00:15:55.748 | 1.00th=[ 6718], 5.00th=[13173], 10.00th=[13566], 20.00th=[13829], 00:15:55.748 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14484], 00:15:55.748 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:15:55.748 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:15:55.748 | 99.99th=[18744] 00:15:55.748 bw ( KiB/s): min=16384, max=16384, per=15.98%, avg=16384.00, stdev= 0.00, samples=2 00:15:55.748 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:15:55.748 lat (msec) : 2=0.01%, 4=0.17%, 10=0.68%, 20=99.14% 00:15:55.748 cpu : usr=1.40%, sys=3.79%, ctx=1949, majf=0, minf=1 00:15:55.748 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:55.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.748 issued rwts: total=4096,4150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.748 00:15:55.748 Run status group 0 (all jobs): 00:15:55.748 READ: bw=97.8MiB/s (103MB/s), 16.0MiB/s-41.7MiB/s (16.7MB/s-43.7MB/s), io=98.1MiB (103MB), run=1002-1003msec 00:15:55.748 WRITE: bw=100MiB/s (105MB/s), 16.2MiB/s-41.9MiB/s (16.9MB/s-44.0MB/s), io=100MiB (105MB), run=1002-1003msec 00:15:55.748 00:15:55.748 Disk stats (read/write): 00:15:55.748 nvme0n1: ios=9014/9216, merge=0/0, ticks=17438/16763, in_queue=34201, util=84.75% 00:15:55.748 nvme0n2: ios=3184/3584, merge=0/0, ticks=12826/13682, in_queue=26508, util=85.60% 00:15:55.748 nvme0n3: ios=5034/5120, merge=0/0, ticks=13212/13339, in_queue=26551, util=88.58% 00:15:55.748 nvme0n4: ios=3182/3584, merge=0/0, ticks=12813/13725, in_queue=26538, util=89.53% 00:15:55.748 18:07:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:55.748 [global] 00:15:55.748 thread=1 00:15:55.748 invalidate=1 00:15:55.748 rw=randwrite 00:15:55.748 time_based=1 00:15:55.748 runtime=1 00:15:55.748 ioengine=libaio 00:15:55.748 direct=1 00:15:55.748 bs=4096 00:15:55.748 iodepth=128 00:15:55.748 norandommap=0 00:15:55.748 numjobs=1 00:15:55.748 00:15:55.748 verify_dump=1 00:15:55.748 verify_backlog=512 00:15:55.748 verify_state_save=0 00:15:55.748 do_verify=1 00:15:55.748 verify=crc32c-intel 00:15:55.748 [job0] 00:15:55.748 filename=/dev/nvme0n1 00:15:55.748 [job1] 00:15:55.748 filename=/dev/nvme0n2 00:15:55.748 [job2] 00:15:55.749 filename=/dev/nvme0n3 00:15:55.749 [job3] 00:15:55.749 filename=/dev/nvme0n4 00:15:55.749 Could not set queue depth (nvme0n1) 00:15:55.749 Could not set queue depth (nvme0n2) 00:15:55.749 Could not set queue depth (nvme0n3) 00:15:55.749 Could not set queue depth (nvme0n4) 
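[Editor's annotation, not part of the captured output.] The [global]/[jobN] parameters that fio-wrapper dumps above correspond to an ordinary fio invocation, and because bs=4096 the IOPS and bandwidth columns in the summaries are the same measurement in different units (bw in KiB/s = IOPS x 4). A minimal sketch under those assumptions follows; the wrapper's -i/-d/-t/-r/-v flags evidently map to the bs, iodepth, rw, runtime and verify settings seen in the dump, and the /dev/nvme0nX paths only exist while the NVMe-oF namespaces from this run are connected.

```bash
#!/usr/bin/env bash
# Rough standalone equivalent of job0 from the job file printed above;
# a sketch, not the wrapper's exact behaviour.
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
    --ioengine=libaio --direct=1 --invalidate=1 \
    --time_based --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0

# Sanity check on the summaries above: job1 and job3 of the preceding
# write run each report avg 4096 IOPS and avg bw 16384 KiB/s at bs=4 KiB.
echo $(( 4096 * 4 ))       # 16384 KiB/s, i.e. 16.0 MiB/s
echo $(( 16384 * 1024 ))   # 16777216 B/s, roughly 16.8 MB/s
```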
00:15:56.007 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:56.007 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:56.007 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:56.007 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:56.007 fio-3.35 00:15:56.007 Starting 4 threads 00:15:57.384 00:15:57.384 job0: (groupid=0, jobs=1): err= 0: pid=1641113: Mon Jul 15 18:07:57 2024 00:15:57.384 read: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1003msec) 00:15:57.384 slat (usec): min=2, max=2753, avg=142.43, stdev=338.83 00:15:57.384 clat (usec): min=2899, max=21878, avg=18126.19, stdev=1760.22 00:15:57.384 lat (usec): min=3648, max=21882, avg=18268.62, stdev=1747.73 00:15:57.384 clat percentiles (usec): 00:15:57.384 | 1.00th=[ 7767], 5.00th=[16319], 10.00th=[16909], 20.00th=[17433], 00:15:57.384 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:15:57.384 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 00:15:57.384 | 99.00th=[20579], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:15:57.384 | 99.99th=[21890] 00:15:57.384 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:15:57.384 slat (usec): min=2, max=2594, avg=134.82, stdev=322.22 00:15:57.384 clat (usec): min=11780, max=20702, avg=17472.61, stdev=784.82 00:15:57.384 lat (usec): min=11791, max=20730, avg=17607.43, stdev=761.36 00:15:57.384 clat percentiles (usec): 00:15:57.384 | 1.00th=[15139], 5.00th=[16057], 10.00th=[16319], 20.00th=[16909], 00:15:57.384 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17695], 00:15:57.384 | 70.00th=[17957], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:15:57.384 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[20055], 00:15:57.384 | 99.99th=[20579] 00:15:57.384 bw ( KiB/s): min=12848, max=15824, per=17.05%, avg=14336.00, stdev=2104.35, samples=2 00:15:57.384 iops : min= 3212, max= 3956, avg=3584.00, stdev=526.09, samples=2 00:15:57.385 lat (msec) : 4=0.13%, 10=0.56%, 20=98.32%, 50=1.00% 00:15:57.385 cpu : usr=2.00%, sys=2.89%, ctx=1877, majf=0, minf=1 00:15:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.385 issued rwts: total=3539,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.385 job1: (groupid=0, jobs=1): err= 0: pid=1641114: Mon Jul 15 18:07:57 2024 00:15:57.385 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:15:57.385 slat (usec): min=2, max=3533, avg=96.30, stdev=379.31 00:15:57.385 clat (usec): min=4516, max=18160, avg=12578.84, stdev=3400.94 00:15:57.385 lat (usec): min=5098, max=18167, avg=12675.14, stdev=3438.61 00:15:57.385 clat percentiles (usec): 00:15:57.385 | 1.00th=[ 5145], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[11994], 00:15:57.385 | 30.00th=[12780], 40.00th=[13698], 50.00th=[13960], 60.00th=[14353], 00:15:57.385 | 70.00th=[14484], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:15:57.385 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:15:57.385 | 99.99th=[18220] 00:15:57.385 write: 
IOPS=5128, BW=20.0MiB/s (21.0MB/s)(20.1MiB/1003msec); 0 zone resets 00:15:57.385 slat (usec): min=2, max=4558, avg=94.72, stdev=383.61 00:15:57.385 clat (usec): min=2710, max=19331, avg=12190.09, stdev=3659.29 00:15:57.385 lat (usec): min=2719, max=19345, avg=12284.81, stdev=3699.96 00:15:57.385 clat percentiles (usec): 00:15:57.385 | 1.00th=[ 4621], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5735], 00:15:57.385 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:15:57.385 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14877], 95.00th=[15533], 00:15:57.385 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17695], 99.95th=[18482], 00:15:57.385 | 99.99th=[19268] 00:15:57.385 bw ( KiB/s): min=16384, max=24576, per=24.35%, avg=20480.00, stdev=5792.62, samples=2 00:15:57.385 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:15:57.385 lat (msec) : 4=0.02%, 10=19.19%, 20=80.79% 00:15:57.385 cpu : usr=2.00%, sys=4.69%, ctx=971, majf=0, minf=1 00:15:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.385 issued rwts: total=5120,5144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.385 job2: (groupid=0, jobs=1): err= 0: pid=1641115: Mon Jul 15 18:07:57 2024 00:15:57.385 read: IOPS=3533, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1003msec) 00:15:57.385 slat (usec): min=2, max=2710, avg=141.24, stdev=316.83 00:15:57.385 clat (usec): min=2319, max=21251, avg=18015.08, stdev=1875.41 00:15:57.385 lat (usec): min=2954, max=21295, avg=18156.32, stdev=1866.76 00:15:57.385 clat percentiles (usec): 00:15:57.385 | 1.00th=[ 7832], 5.00th=[16057], 10.00th=[16581], 20.00th=[17433], 00:15:57.385 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18482], 00:15:57.385 | 70.00th=[18744], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 00:15:57.385 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:15:57.385 | 99.99th=[21365] 00:15:57.385 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:15:57.385 slat (usec): min=2, max=2636, avg=135.88, stdev=318.72 00:15:57.385 clat (usec): min=13286, max=20741, avg=17509.14, stdev=806.01 00:15:57.385 lat (usec): min=13325, max=20826, avg=17645.03, stdev=783.49 00:15:57.385 clat percentiles (usec): 00:15:57.385 | 1.00th=[15533], 5.00th=[16057], 10.00th=[16188], 20.00th=[16909], 00:15:57.385 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:15:57.385 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:15:57.385 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20055], 99.95th=[20317], 00:15:57.385 | 99.99th=[20841] 00:15:57.385 bw ( KiB/s): min=13000, max=15672, per=17.05%, avg=14336.00, stdev=1889.39, samples=2 00:15:57.385 iops : min= 3250, max= 3918, avg=3584.00, stdev=472.35, samples=2 00:15:57.385 lat (msec) : 4=0.22%, 10=0.56%, 20=98.37%, 50=0.84% 00:15:57.385 cpu : usr=1.60%, sys=3.29%, ctx=1843, majf=0, minf=1 00:15:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.385 issued rwts: total=3544,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.385 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:15:57.385 job3: (groupid=0, jobs=1): err= 0: pid=1641117: Mon Jul 15 18:07:57 2024 00:15:57.385 read: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(34.0MiB/1001msec) 00:15:57.385 slat (usec): min=2, max=1968, avg=55.32, stdev=189.91 00:15:57.385 clat (usec): min=4394, max=18443, avg=7121.03, stdev=2497.56 00:15:57.385 lat (usec): min=4402, max=18454, avg=7176.35, stdev=2516.70 00:15:57.385 clat percentiles (usec): 00:15:57.385 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 5997], 20.00th=[ 6128], 00:15:57.385 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6718], 00:15:57.385 | 70.00th=[ 6849], 80.00th=[ 6915], 90.00th=[ 7046], 95.00th=[16188], 00:15:57.385 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17957], 99.95th=[17957], 00:15:57.385 | 99.99th=[18482] 00:15:57.385 write: IOPS=8768, BW=34.2MiB/s (35.9MB/s)(34.3MiB/1001msec); 0 zone resets 00:15:57.385 slat (usec): min=2, max=2046, avg=56.30, stdev=194.21 00:15:57.385 clat (usec): min=351, max=18617, avg=7333.56, stdev=3476.39 00:15:57.385 lat (usec): min=967, max=18621, avg=7389.86, stdev=3499.63 00:15:57.385 clat percentiles (usec): 00:15:57.385 | 1.00th=[ 5407], 5.00th=[ 5538], 10.00th=[ 5604], 20.00th=[ 5800], 00:15:57.385 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6325], 00:15:57.385 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[16057], 95.00th=[16909], 00:15:57.385 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18744], 00:15:57.385 | 99.99th=[18744] 00:15:57.385 bw ( KiB/s): min=29512, max=29512, per=35.09%, avg=29512.00, stdev= 0.00, samples=1 00:15:57.385 iops : min= 7378, max= 7378, avg=7378.00, stdev= 0.00, samples=1 00:15:57.385 lat (usec) : 500=0.01%, 1000=0.05% 00:15:57.385 lat (msec) : 2=0.10%, 4=0.26%, 10=90.40%, 20=9.19% 00:15:57.385 cpu : usr=3.20%, sys=5.90%, ctx=1558, majf=0, minf=1 00:15:57.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:57.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.385 issued rwts: total=8704,8777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.385 00:15:57.385 Run status group 0 (all jobs): 00:15:57.385 READ: bw=81.4MiB/s (85.4MB/s), 13.8MiB/s-34.0MiB/s (14.5MB/s-35.6MB/s), io=81.7MiB (85.6MB), run=1001-1003msec 00:15:57.385 WRITE: bw=82.1MiB/s (86.1MB/s), 14.0MiB/s-34.2MiB/s (14.6MB/s-35.9MB/s), io=82.4MiB (86.4MB), run=1001-1003msec 00:15:57.385 00:15:57.385 Disk stats (read/write): 00:15:57.385 nvme0n1: ios=2890/3072, merge=0/0, ticks=13005/13287, in_queue=26292, util=84.17% 00:15:57.385 nvme0n2: ios=4127/4608, merge=0/0, ticks=16266/17878, in_queue=34144, util=85.23% 00:15:57.385 nvme0n3: ios=2840/3072, merge=0/0, ticks=12925/13382, in_queue=26307, util=88.31% 00:15:57.385 nvme0n4: ios=6860/7168, merge=0/0, ticks=12726/13501, in_queue=26227, util=89.35% 00:15:57.385 18:07:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:57.385 18:07:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1641386 00:15:57.385 18:07:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:57.385 18:07:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:57.385 [global] 00:15:57.385 thread=1 00:15:57.385 invalidate=1 00:15:57.385 rw=read 00:15:57.385 time_based=1 00:15:57.385 
runtime=10 00:15:57.385 ioengine=libaio 00:15:57.385 direct=1 00:15:57.385 bs=4096 00:15:57.385 iodepth=1 00:15:57.385 norandommap=1 00:15:57.385 numjobs=1 00:15:57.385 00:15:57.385 [job0] 00:15:57.385 filename=/dev/nvme0n1 00:15:57.385 [job1] 00:15:57.385 filename=/dev/nvme0n2 00:15:57.385 [job2] 00:15:57.385 filename=/dev/nvme0n3 00:15:57.385 [job3] 00:15:57.385 filename=/dev/nvme0n4 00:15:57.385 Could not set queue depth (nvme0n1) 00:15:57.385 Could not set queue depth (nvme0n2) 00:15:57.385 Could not set queue depth (nvme0n3) 00:15:57.386 Could not set queue depth (nvme0n4) 00:15:57.659 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.659 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.659 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.659 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.659 fio-3.35 00:15:57.659 Starting 4 threads 00:16:00.189 18:08:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:00.447 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=76320768, buflen=4096 00:16:00.447 fio: pid=1641544, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:00.447 18:08:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:00.447 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=72749056, buflen=4096 00:16:00.447 fio: pid=1641543, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:00.705 18:08:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:00.705 18:08:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:00.705 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=43003904, buflen=4096 00:16:00.705 fio: pid=1641541, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:00.705 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:00.705 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:00.965 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=22745088, buflen=4096 00:16:00.965 fio: pid=1641542, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:00.965 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:00.965 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:00.965 00:16:00.965 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1641541: Mon Jul 15 18:08:01 2024 00:16:00.965 read: IOPS=9103, BW=35.6MiB/s (37.3MB/s)(105MiB/2953msec) 00:16:00.965 slat (usec): min=8, max=12939, avg=10.38, stdev=129.51 00:16:00.965 clat (usec): min=51, max=659, avg=98.15, stdev=31.58 00:16:00.965 lat (usec): min=60, max=13014, 
avg=108.52, stdev=133.42 00:16:00.965 clat percentiles (usec): 00:16:00.965 | 1.00th=[ 62], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:16:00.965 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 84], 00:16:00.965 | 70.00th=[ 113], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:16:00.965 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 204], 00:16:00.965 | 99.99th=[ 494] 00:16:00.965 bw ( KiB/s): min=25472, max=45816, per=34.04%, avg=36787.20, stdev=9970.77, samples=5 00:16:00.965 iops : min= 6368, max=11454, avg=9196.80, stdev=2492.69, samples=5 00:16:00.965 lat (usec) : 100=67.20%, 250=32.77%, 500=0.01%, 750=0.01% 00:16:00.965 cpu : usr=3.42%, sys=13.28%, ctx=26888, majf=0, minf=1 00:16:00.965 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 issued rwts: total=26884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.965 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.965 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1641542: Mon Jul 15 18:08:01 2024 00:16:00.965 read: IOPS=6955, BW=27.2MiB/s (28.5MB/s)(85.7MiB/3154msec) 00:16:00.965 slat (usec): min=7, max=11876, avg=11.33, stdev=150.09 00:16:00.965 clat (usec): min=40, max=8791, avg=130.13, stdev=74.50 00:16:00.965 lat (usec): min=59, max=11986, avg=141.47, stdev=167.07 00:16:00.965 clat percentiles (usec): 00:16:00.965 | 1.00th=[ 56], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 83], 00:16:00.965 | 30.00th=[ 129], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 147], 00:16:00.965 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 165], 00:16:00.965 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 217], 99.95th=[ 245], 00:16:00.965 | 99.99th=[ 1012] 00:16:00.965 bw ( KiB/s): min=24552, max=38198, per=25.31%, avg=27355.67, stdev=5353.68, samples=6 00:16:00.965 iops : min= 6138, max= 9549, avg=6838.83, stdev=1338.22, samples=6 00:16:00.965 lat (usec) : 50=0.01%, 100=22.70%, 250=77.24%, 500=0.03%, 750=0.01% 00:16:00.965 lat (msec) : 2=0.01%, 10=0.01% 00:16:00.965 cpu : usr=2.79%, sys=10.34%, ctx=21945, majf=0, minf=1 00:16:00.965 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 issued rwts: total=21938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.965 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.965 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1641543: Mon Jul 15 18:08:01 2024 00:16:00.965 read: IOPS=6389, BW=25.0MiB/s (26.2MB/s)(69.4MiB/2780msec) 00:16:00.965 slat (usec): min=8, max=15899, avg=10.37, stdev=132.88 00:16:00.965 clat (usec): min=67, max=1035, avg=144.31, stdev=21.66 00:16:00.965 lat (usec): min=75, max=16015, avg=154.68, stdev=134.29 00:16:00.965 clat percentiles (usec): 00:16:00.965 | 1.00th=[ 81], 5.00th=[ 103], 10.00th=[ 116], 20.00th=[ 137], 00:16:00.965 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:16:00.965 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:16:00.965 | 99.00th=[ 198], 99.50th=[ 202], 99.90th=[ 210], 99.95th=[ 239], 00:16:00.965 | 99.99th=[ 578] 00:16:00.965 bw ( KiB/s): min=24560, max=26392, 
per=23.32%, avg=25204.80, stdev=730.78, samples=5 00:16:00.965 iops : min= 6140, max= 6598, avg=6301.20, stdev=182.69, samples=5 00:16:00.965 lat (usec) : 100=4.60%, 250=95.36%, 500=0.03%, 750=0.01% 00:16:00.965 lat (msec) : 2=0.01% 00:16:00.965 cpu : usr=3.38%, sys=8.92%, ctx=17765, majf=0, minf=1 00:16:00.965 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 issued rwts: total=17762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.965 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.965 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1641544: Mon Jul 15 18:08:01 2024 00:16:00.965 read: IOPS=7109, BW=27.8MiB/s (29.1MB/s)(72.8MiB/2621msec) 00:16:00.965 slat (nsec): min=8156, max=34750, avg=9102.33, stdev=1161.60 00:16:00.965 clat (usec): min=63, max=654, avg=128.69, stdev=32.48 00:16:00.965 lat (usec): min=72, max=663, avg=137.79, stdev=32.32 00:16:00.965 clat percentiles (usec): 00:16:00.965 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 87], 00:16:00.965 | 30.00th=[ 94], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 147], 00:16:00.965 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:16:00.965 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 212], 99.95th=[ 217], 00:16:00.965 | 99.99th=[ 478] 00:16:00.965 bw ( KiB/s): min=24568, max=40968, per=26.41%, avg=28544.00, stdev=6985.20, samples=5 00:16:00.965 iops : min= 6142, max=10242, avg=7136.00, stdev=1746.30, samples=5 00:16:00.965 lat (usec) : 100=32.10%, 250=67.86%, 500=0.03%, 750=0.01% 00:16:00.965 cpu : usr=3.17%, sys=10.00%, ctx=18634, majf=0, minf=2 00:16:00.965 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.965 issued rwts: total=18634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.965 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.965 00:16:00.965 Run status group 0 (all jobs): 00:16:00.965 READ: bw=106MiB/s (111MB/s), 25.0MiB/s-35.6MiB/s (26.2MB/s-37.3MB/s), io=333MiB (349MB), run=2621-3154msec 00:16:00.965 00:16:00.965 Disk stats (read/write): 00:16:00.965 nvme0n1: ios=25651/0, merge=0/0, ticks=2318/0, in_queue=2318, util=93.55% 00:16:00.965 nvme0n2: ios=21319/0, merge=0/0, ticks=2647/0, in_queue=2647, util=94.21% 00:16:00.965 nvme0n3: ios=16345/0, merge=0/0, ticks=2254/0, in_queue=2254, util=96.10% 00:16:00.965 nvme0n4: ios=18565/0, merge=0/0, ticks=2254/0, in_queue=2254, util=96.46% 00:16:01.224 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:01.224 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:01.483 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:01.483 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:01.483 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:01.483 18:08:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:01.742 18:08:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:01.742 18:08:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:02.000 18:08:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:02.000 18:08:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 1641386 00:16:02.000 18:08:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:02.000 18:08:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:02.935 nvmf hotplug test: fio failed as expected 00:16:02.935 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:03.217 rmmod nvme_rdma 00:16:03.217 rmmod nvme_fabrics 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:03.217 
18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1638295 ']' 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1638295 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1638295 ']' 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1638295 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1638295 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1638295' 00:16:03.217 killing process with pid 1638295 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1638295 00:16:03.217 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1638295 00:16:03.479 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.479 18:08:03 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:03.479 00:16:03.479 real 0m27.949s 00:16:03.479 user 2m7.032s 00:16:03.479 sys 0m11.164s 00:16:03.479 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:03.479 18:08:03 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.479 ************************************ 00:16:03.479 END TEST nvmf_fio_target 00:16:03.479 ************************************ 00:16:03.479 18:08:03 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:16:03.479 18:08:03 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:03.479 18:08:03 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:03.479 18:08:03 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:03.479 18:08:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:03.479 ************************************ 00:16:03.479 START TEST nvmf_bdevio 00:16:03.479 ************************************ 00:16:03.479 18:08:03 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:03.821 * Looking for test storage... 
00:16:03.821 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:03.821 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.822 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.822 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.822 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:03.822 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:03.822 18:08:03 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:03.822 18:08:04 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:11.938 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:11.938 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:11.938 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:11.938 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:11.938 
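[Editor's annotation, not part of the captured output.] The discovery trace above resolves each Mellanox PCI function to its netdev by globbing sysfs (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)), which is why every "Found 0000:d9:00.x" line is paired with an mlx_0_X name. A minimal sketch of the same lookup for one port, assuming the 0000:d9:00.1 address seen on this rig:

```bash
# Map a PCI function to its network interface name, as the trace above does.
pci=0000:d9:00.1
ls /sys/bus/pci/devices/"$pci"/net/          # prints mlx_0_1 on this machine

# The allocate_nic_ips step further down extracts the interface address with
# an equivalent of:
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9 here
```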
18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:11.938 18:08:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:11.938 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:11.938 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:11.938 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:11.938 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:11.939 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:11.939 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:11.939 altname enp217s0f0np0 00:16:11.939 altname ens818f0np0 00:16:11.939 inet 192.168.100.8/24 scope global mlx_0_0 00:16:11.939 valid_lft forever preferred_lft forever 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:11.939 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:11.939 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:11.939 altname enp217s0f1np1 00:16:11.939 altname ens818f1np1 00:16:11.939 inet 192.168.100.9/24 scope global mlx_0_1 00:16:11.939 valid_lft forever preferred_lft forever 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:11.939 192.168.100.9' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:11.939 192.168.100.9' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:11.939 192.168.100.9' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.939 18:08:12 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1646544 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1646544 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1646544 ']' 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:11.939 18:08:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:11.939 [2024-07-15 18:08:12.268091] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:16:11.939 [2024-07-15 18:08:12.268146] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.939 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.199 [2024-07-15 18:08:12.355968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.199 [2024-07-15 18:08:12.428366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.199 [2024-07-15 18:08:12.428407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.199 [2024-07-15 18:08:12.428416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.199 [2024-07-15 18:08:12.428424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.199 [2024-07-15 18:08:12.428431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
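nvmf_tgt is started here with core mask -m 0x78: 0x78 = 0b1111000, i.e. cores 3-6, which matches the four reactors reported on cores 3, 4, 5 and 6 in the lines that follow. A quick, purely illustrative way to decode such a mask in bash:

mask=0x78
for core in {0..7}; do
  (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done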
00:16:12.199 [2024-07-15 18:08:12.428552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:12.199 [2024-07-15 18:08:12.428675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:12.199 [2024-07-15 18:08:12.428787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.199 [2024-07-15 18:08:12.428789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.767 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:12.767 [2024-07-15 18:08:13.157822] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x87e690/0x882b80) succeed. 00:16:12.767 [2024-07-15 18:08:13.167063] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x87fcd0/0x8c4210) succeed. 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:13.038 Malloc0 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:13.038 [2024-07-15 18:08:13.324265] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:13.038 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:13.038 { 00:16:13.038 "params": { 00:16:13.038 "name": "Nvme$subsystem", 00:16:13.038 "trtype": "$TEST_TRANSPORT", 00:16:13.038 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.038 "adrfam": "ipv4", 00:16:13.038 "trsvcid": "$NVMF_PORT", 00:16:13.038 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.038 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.038 "hdgst": ${hdgst:-false}, 00:16:13.038 "ddgst": ${ddgst:-false} 00:16:13.038 }, 00:16:13.038 "method": "bdev_nvme_attach_controller" 00:16:13.038 } 00:16:13.038 EOF 00:16:13.038 )") 00:16:13.039 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:13.039 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:13.039 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:13.039 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:13.039 "params": { 00:16:13.039 "name": "Nvme1", 00:16:13.039 "trtype": "rdma", 00:16:13.039 "traddr": "192.168.100.8", 00:16:13.039 "adrfam": "ipv4", 00:16:13.039 "trsvcid": "4420", 00:16:13.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.039 "hdgst": false, 00:16:13.039 "ddgst": false 00:16:13.039 }, 00:16:13.039 "method": "bdev_nvme_attach_controller" 00:16:13.039 }' 00:16:13.039 [2024-07-15 18:08:13.372815] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:16:13.039 [2024-07-15 18:08:13.372869] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1646829 ] 00:16:13.039 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.297 [2024-07-15 18:08:13.456775] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:13.297 [2024-07-15 18:08:13.529492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.297 [2024-07-15 18:08:13.529588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.297 [2024-07-15 18:08:13.529590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.556 I/O targets: 00:16:13.556 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:13.556 00:16:13.556 00:16:13.556 CUnit - A unit testing framework for C - Version 2.1-3 00:16:13.556 http://cunit.sourceforge.net/ 00:16:13.556 00:16:13.556 00:16:13.556 Suite: bdevio tests on: Nvme1n1 00:16:13.556 Test: blockdev write read block ...passed 00:16:13.556 Test: blockdev write zeroes read block ...passed 00:16:13.556 Test: blockdev write zeroes read no split ...passed 00:16:13.556 Test: blockdev write zeroes read split ...passed 00:16:13.556 Test: blockdev write zeroes read split partial ...passed 00:16:13.556 Test: blockdev reset ...[2024-07-15 18:08:13.733612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:13.556 [2024-07-15 18:08:13.756601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:13.556 [2024-07-15 18:08:13.783033] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:13.556 passed 00:16:13.556 Test: blockdev write read 8 blocks ...passed 00:16:13.556 Test: blockdev write read size > 128k ...passed 00:16:13.556 Test: blockdev write read invalid size ...passed 00:16:13.556 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:13.556 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:13.556 Test: blockdev write read max offset ...passed 00:16:13.556 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:13.556 Test: blockdev writev readv 8 blocks ...passed 00:16:13.556 Test: blockdev writev readv 30 x 1block ...passed 00:16:13.556 Test: blockdev writev readv block ...passed 00:16:13.556 Test: blockdev writev readv size > 128k ...passed 00:16:13.556 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:13.556 Test: blockdev comparev and writev ...[2024-07-15 18:08:13.785939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.785968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.785980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.785990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.786168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.786180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.786191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.786200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.786359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.786370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.786380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.786389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.786548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.786559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:13.556 [2024-07-15 18:08:13.786569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:13.556 [2024-07-15 18:08:13.786578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:13.556 passed 00:16:13.556 Test: blockdev nvme passthru rw ...passed 00:16:13.557 Test: blockdev nvme passthru vendor specific ...[2024-07-15 18:08:13.786843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:13.557 [2024-07-15 18:08:13.786855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:13.557 [2024-07-15 18:08:13.786899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:13.557 [2024-07-15 18:08:13.786910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:13.557 [2024-07-15 18:08:13.786947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:13.557 [2024-07-15 18:08:13.786957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:13.557 [2024-07-15 18:08:13.786994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:13.557 [2024-07-15 18:08:13.787004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:13.557 passed 00:16:13.557 Test: blockdev nvme admin passthru ...passed 00:16:13.557 Test: blockdev copy ...passed 00:16:13.557 00:16:13.557 Run Summary: Type Total Ran Passed Failed Inactive 00:16:13.557 suites 1 1 n/a 0 0 00:16:13.557 tests 23 23 23 0 0 00:16:13.557 asserts 152 152 152 0 n/a 00:16:13.557 00:16:13.557 Elapsed time = 0.171 seconds 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.816 18:08:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:13.816 rmmod nvme_rdma 00:16:13.816 rmmod nvme_fabrics 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1646544 ']' 00:16:13.816 18:08:14 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1646544 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1646544 ']' 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1646544 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1646544 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1646544' 00:16:13.816 killing process with pid 1646544 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1646544 00:16:13.816 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1646544 00:16:14.075 18:08:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.075 18:08:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:14.075 00:16:14.075 real 0m10.503s 00:16:14.075 user 0m11.082s 00:16:14.075 sys 0m6.856s 00:16:14.075 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.075 18:08:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:14.075 ************************************ 00:16:14.075 END TEST nvmf_bdevio 00:16:14.075 ************************************ 00:16:14.075 18:08:14 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:16:14.075 18:08:14 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:14.075 18:08:14 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.075 18:08:14 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.075 18:08:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:14.075 ************************************ 00:16:14.075 START TEST nvmf_auth_target 00:16:14.075 ************************************ 00:16:14.075 18:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:14.334 * Looking for test storage... 
00:16:14.334 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:14.334 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.334 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.335 18:08:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:22.458 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:22.458 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:22.458 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.458 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:22.459 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:22.459 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:22.459 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:22.459 altname enp217s0f0np0 00:16:22.459 altname ens818f0np0 00:16:22.459 inet 192.168.100.8/24 scope global mlx_0_0 00:16:22.459 valid_lft forever preferred_lft forever 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:22.459 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:22.459 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:22.459 altname enp217s0f1np1 00:16:22.459 altname ens818f1np1 00:16:22.459 inet 192.168.100.9/24 scope global mlx_0_1 00:16:22.459 valid_lft forever preferred_lft forever 
00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:22.459 
18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:22.459 192.168.100.9' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:22.459 192.168.100.9' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:22.459 192.168.100.9' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1650998 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1650998 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1650998 ']' 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.459 18:08:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1651032 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=da02ea5428e8df49e6bb0bb9221931b8e5ca602bcfdafb96 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.XaX 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key da02ea5428e8df49e6bb0bb9221931b8e5ca602bcfdafb96 0 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 da02ea5428e8df49e6bb0bb9221931b8e5ca602bcfdafb96 0 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=da02ea5428e8df49e6bb0bb9221931b8e5ca602bcfdafb96 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.XaX 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.XaX 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.XaX 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9be1cbfb5db4abccf95355b46e9d0451814f256188966e1acf851c2bf8a2cbf6 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Z6M 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9be1cbfb5db4abccf95355b46e9d0451814f256188966e1acf851c2bf8a2cbf6 3 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9be1cbfb5db4abccf95355b46e9d0451814f256188966e1acf851c2bf8a2cbf6 3 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9be1cbfb5db4abccf95355b46e9d0451814f256188966e1acf851c2bf8a2cbf6 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Z6M 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Z6M 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Z6M 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.397 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c04f8748129f1766088e92ab0b252ca5 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TMP 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c04f8748129f1766088e92ab0b252ca5 1 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c04f8748129f1766088e92ab0b252ca5 1 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c04f8748129f1766088e92ab0b252ca5 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TMP 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TMP 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.TMP 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=22d6db96509d7357be91272971660c61d53118db0bb02d8f 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xY8 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 22d6db96509d7357be91272971660c61d53118db0bb02d8f 2 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 22d6db96509d7357be91272971660c61d53118db0bb02d8f 2 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=22d6db96509d7357be91272971660c61d53118db0bb02d8f 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:23.398 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xY8 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xY8 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.xY8 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=4219b167258ac5416aa310e8686297591c141154810d060b 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RjY 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4219b167258ac5416aa310e8686297591c141154810d060b 2 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4219b167258ac5416aa310e8686297591c141154810d060b 2 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4219b167258ac5416aa310e8686297591c141154810d060b 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RjY 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RjY 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.RjY 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=886364cb4f58830ecdcd9e5a60e75799 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wpo 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 886364cb4f58830ecdcd9e5a60e75799 1 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 886364cb4f58830ecdcd9e5a60e75799 1 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=886364cb4f58830ecdcd9e5a60e75799 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wpo 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wpo 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.wpo 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6067b3f105223a289c46364c6ecb7bfcd8ae2b16fddf9cdca9736d0b054e51c4 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zi8 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6067b3f105223a289c46364c6ecb7bfcd8ae2b16fddf9cdca9736d0b054e51c4 3 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6067b3f105223a289c46364c6ecb7bfcd8ae2b16fddf9cdca9736d0b054e51c4 3 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6067b3f105223a289c46364c6ecb7bfcd8ae2b16fddf9cdca9736d0b054e51c4 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:23.677 18:08:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zi8 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zi8 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.zi8 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1650998 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1650998 ']' 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
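Every gen_dhchap_key call in the block above does the same three things: read len/2 random bytes from /dev/urandom as a hex string with xxd, wrap that string in a DHHC-1 secret, and drop it into a 0600 temp file under /tmp. The condensed sketch below mirrors that flow; the digest-to-id table is the digests array from the trace, while the base64(key + little-endian CRC32) encoding inside the python step is inferred from the DHHC-1 secrets printed later in this log rather than copied from nvmf/common.sh:

# Sketch of gen_dhchap_key <digest> <len>, e.g. "null 48" or "sha512 64".
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex string of $len characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # DHHC-1:<2-digit digest id>:<base64(key ASCII || 4-byte LE CRC32 of key)>:
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key))
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

target/auth.sh@67-70 above call this as keys[0]=$(gen_dhchap_key null 48), ckeys[0]=$(gen_dhchap_key sha512 64), and so on for keys 1-3, which is where the /tmp/spdk.key-* paths in the following entries come from.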
00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.677 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1651032 /var/tmp/host.sock 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1651032 ']' 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:23.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.936 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XaX 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XaX 00:16:24.194 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XaX 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Z6M ]] 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z6M 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z6M 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Z6M 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.TMP 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.TMP 00:16:24.452 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.TMP 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.xY8 ]] 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xY8 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xY8 00:16:24.711 18:08:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.xY8 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RjY 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.RjY 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.RjY 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.wpo ]] 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wpo 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wpo 00:16:24.969 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wpo 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zi8 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zi8 00:16:25.228 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zi8 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.486 18:08:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.744 00:16:25.744 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.744 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.744 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.002 { 00:16:26.002 "cntlid": 1, 00:16:26.002 "qid": 0, 00:16:26.002 "state": "enabled", 00:16:26.002 "thread": "nvmf_tgt_poll_group_000", 00:16:26.002 "listen_address": { 00:16:26.002 "trtype": "RDMA", 00:16:26.002 "adrfam": "IPv4", 00:16:26.002 "traddr": "192.168.100.8", 00:16:26.002 "trsvcid": "4420" 00:16:26.002 }, 00:16:26.002 "peer_address": { 00:16:26.002 "trtype": "RDMA", 00:16:26.002 "adrfam": "IPv4", 00:16:26.002 "traddr": "192.168.100.8", 00:16:26.002 "trsvcid": "49905" 00:16:26.002 }, 00:16:26.002 "auth": { 00:16:26.002 "state": "completed", 00:16:26.002 "digest": "sha256", 00:16:26.002 "dhgroup": "null" 00:16:26.002 } 00:16:26.002 } 00:16:26.002 ]' 00:16:26.002 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.003 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.003 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.003 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:26.003 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.260 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.260 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.261 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.261 18:08:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:16:26.826 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.127 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.385 00:16:27.385 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.385 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.385 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.643 18:08:27 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.643 { 00:16:27.643 "cntlid": 3, 00:16:27.643 "qid": 0, 00:16:27.643 "state": "enabled", 00:16:27.643 "thread": "nvmf_tgt_poll_group_000", 00:16:27.643 "listen_address": { 00:16:27.643 "trtype": "RDMA", 00:16:27.643 "adrfam": "IPv4", 00:16:27.643 "traddr": "192.168.100.8", 00:16:27.643 "trsvcid": "4420" 00:16:27.643 }, 00:16:27.643 "peer_address": { 00:16:27.643 "trtype": "RDMA", 00:16:27.643 "adrfam": "IPv4", 00:16:27.643 "traddr": "192.168.100.8", 00:16:27.643 "trsvcid": "39038" 00:16:27.643 }, 00:16:27.643 "auth": { 00:16:27.643 "state": "completed", 00:16:27.643 "digest": "sha256", 00:16:27.643 "dhgroup": "null" 00:16:27.643 } 00:16:27.643 } 00:16:27.643 ]' 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.643 18:08:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.643 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.643 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.901 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.901 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.901 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.901 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:16:28.468 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.726 18:08:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.984 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.984 00:16:29.241 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.241 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.241 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.241 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.242 { 00:16:29.242 "cntlid": 5, 00:16:29.242 "qid": 0, 00:16:29.242 "state": "enabled", 00:16:29.242 "thread": "nvmf_tgt_poll_group_000", 00:16:29.242 "listen_address": { 00:16:29.242 "trtype": "RDMA", 00:16:29.242 "adrfam": "IPv4", 00:16:29.242 "traddr": "192.168.100.8", 00:16:29.242 "trsvcid": "4420" 00:16:29.242 }, 00:16:29.242 "peer_address": { 00:16:29.242 "trtype": "RDMA", 00:16:29.242 "adrfam": "IPv4", 00:16:29.242 "traddr": "192.168.100.8", 00:16:29.242 "trsvcid": "33402" 00:16:29.242 }, 00:16:29.242 "auth": { 00:16:29.242 "state": "completed", 00:16:29.242 "digest": "sha256", 00:16:29.242 "dhgroup": "null" 00:16:29.242 } 00:16:29.242 } 00:16:29.242 ]' 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:29.242 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.500 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.500 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.500 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.500 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.500 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.500 18:08:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- 
# hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.435 18:08:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.694 00:16:30.694 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.694 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.694 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.952 { 00:16:30.952 "cntlid": 7, 00:16:30.952 "qid": 0, 00:16:30.952 "state": "enabled", 00:16:30.952 "thread": "nvmf_tgt_poll_group_000", 00:16:30.952 "listen_address": { 00:16:30.952 "trtype": "RDMA", 00:16:30.952 "adrfam": "IPv4", 00:16:30.952 "traddr": "192.168.100.8", 00:16:30.952 "trsvcid": "4420" 00:16:30.952 }, 00:16:30.952 "peer_address": { 00:16:30.952 "trtype": "RDMA", 00:16:30.952 "adrfam": "IPv4", 00:16:30.952 "traddr": "192.168.100.8", 00:16:30.952 "trsvcid": "38194" 00:16:30.952 }, 00:16:30.952 "auth": { 00:16:30.952 "state": "completed", 00:16:30.952 "digest": "sha256", 00:16:30.952 "dhgroup": "null" 00:16:30.952 } 00:16:30.952 } 00:16:30.952 ]' 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.952 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.210 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.210 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.210 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.210 18:08:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret 
DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:16:31.777 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.034 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.292 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.550 00:16:32.550 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.550 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
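With the key files registered on both daemons through keyring_file_add_key, the remainder of this section is one connect_authenticate round per digest/dhgroup/key combination. Flattened out of the helpers, a single sha256/ffdhe2048 round is just the rpc.py and jq calls already visible in the trace (socket paths, NQNs and the host UUID are the ones used by this run; key0/ckey0 are the key names registered above):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Initiator side (spdk_tgt on /var/tmp/host.sock): pin the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side (nvmf_tgt on the default /var/tmp/spdk.sock): authorize the host with the key pair.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach over RDMA from the initiator, then verify DH-HMAC-CHAP completed on the target qpair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The jq checks at target/auth.sh@46-48 additionally assert that .auth.digest and .auth.dhgroup in the same qpair listing match the pair that was forced on the initiator.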
00:16:32.550 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.550 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.550 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.550 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.551 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.551 18:08:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.551 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.551 { 00:16:32.551 "cntlid": 9, 00:16:32.551 "qid": 0, 00:16:32.551 "state": "enabled", 00:16:32.551 "thread": "nvmf_tgt_poll_group_000", 00:16:32.551 "listen_address": { 00:16:32.551 "trtype": "RDMA", 00:16:32.551 "adrfam": "IPv4", 00:16:32.551 "traddr": "192.168.100.8", 00:16:32.551 "trsvcid": "4420" 00:16:32.551 }, 00:16:32.551 "peer_address": { 00:16:32.551 "trtype": "RDMA", 00:16:32.551 "adrfam": "IPv4", 00:16:32.551 "traddr": "192.168.100.8", 00:16:32.551 "trsvcid": "56159" 00:16:32.551 }, 00:16:32.551 "auth": { 00:16:32.551 "state": "completed", 00:16:32.551 "digest": "sha256", 00:16:32.551 "dhgroup": "ffdhe2048" 00:16:32.551 } 00:16:32.551 } 00:16:32.551 ]' 00:16:32.551 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.810 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.810 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.810 18:08:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.810 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.810 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.810 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.810 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.067 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:16:33.633 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.633 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:33.633 18:08:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.633 18:08:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.633 18:08:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.634 18:08:33 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.634 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.634 18:08:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.891 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.150 00:16:34.150 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.150 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.150 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.408 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.408 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.408 18:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.408 18:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.408 18:08:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.408 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.408 { 00:16:34.408 "cntlid": 11, 00:16:34.409 "qid": 0, 00:16:34.409 "state": "enabled", 00:16:34.409 "thread": "nvmf_tgt_poll_group_000", 00:16:34.409 "listen_address": { 00:16:34.409 "trtype": "RDMA", 00:16:34.409 
"adrfam": "IPv4", 00:16:34.409 "traddr": "192.168.100.8", 00:16:34.409 "trsvcid": "4420" 00:16:34.409 }, 00:16:34.409 "peer_address": { 00:16:34.409 "trtype": "RDMA", 00:16:34.409 "adrfam": "IPv4", 00:16:34.409 "traddr": "192.168.100.8", 00:16:34.409 "trsvcid": "46997" 00:16:34.409 }, 00:16:34.409 "auth": { 00:16:34.409 "state": "completed", 00:16:34.409 "digest": "sha256", 00:16:34.409 "dhgroup": "ffdhe2048" 00:16:34.409 } 00:16:34.409 } 00:16:34.409 ]' 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.409 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.667 18:08:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:16:35.249 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.249 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:35.249 18:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.249 18:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:35.508 18:08:35 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.508 18:08:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.767 00:16:35.767 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.767 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.767 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.025 { 00:16:36.025 "cntlid": 13, 00:16:36.025 "qid": 0, 00:16:36.025 "state": "enabled", 00:16:36.025 "thread": "nvmf_tgt_poll_group_000", 00:16:36.025 "listen_address": { 00:16:36.025 "trtype": "RDMA", 00:16:36.025 "adrfam": "IPv4", 00:16:36.025 "traddr": "192.168.100.8", 00:16:36.025 "trsvcid": "4420" 00:16:36.025 }, 00:16:36.025 "peer_address": { 00:16:36.025 "trtype": "RDMA", 00:16:36.025 "adrfam": "IPv4", 00:16:36.025 "traddr": "192.168.100.8", 00:16:36.025 "trsvcid": "51297" 00:16:36.025 }, 00:16:36.025 "auth": { 00:16:36.025 "state": "completed", 00:16:36.025 "digest": "sha256", 00:16:36.025 "dhgroup": "ffdhe2048" 00:16:36.025 } 00:16:36.025 } 00:16:36.025 ]' 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.025 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
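The xtrace above repeats one connect_authenticate round per DH group and key index. A condensed sketch of that round follows, built only from the commands visible in this log; the subsystem NQN, host RPC socket, RDMA address, and flags are taken from the log itself, while $hostnqn/$hostid (the uuid-based host NQN shown above), the rpc_cmd helper (the test's wrapper for the target-side RPC socket), and the $key/$ckey DHHC-1 secrets are placeholders for values the test pre-generates:

  # host side: restrict the initiator to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: allow the host on the subsystem with this iteration's key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach through the SPDK host stack, then confirm the qpair authenticated
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake through the kernel initiator using the literal DHHC-1 secrets
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"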
00:16:36.026 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.026 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.026 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.294 18:08:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:16:36.861 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.120 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.379 00:16:37.379 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.379 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.379 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.639 { 00:16:37.639 "cntlid": 15, 00:16:37.639 "qid": 0, 00:16:37.639 "state": "enabled", 00:16:37.639 "thread": "nvmf_tgt_poll_group_000", 00:16:37.639 "listen_address": { 00:16:37.639 "trtype": "RDMA", 00:16:37.639 "adrfam": "IPv4", 00:16:37.639 "traddr": "192.168.100.8", 00:16:37.639 "trsvcid": "4420" 00:16:37.639 }, 00:16:37.639 "peer_address": { 00:16:37.639 "trtype": "RDMA", 00:16:37.639 "adrfam": "IPv4", 00:16:37.639 "traddr": "192.168.100.8", 00:16:37.639 "trsvcid": "54105" 00:16:37.639 }, 00:16:37.639 "auth": { 00:16:37.639 "state": "completed", 00:16:37.639 "digest": "sha256", 00:16:37.639 "dhgroup": "ffdhe2048" 00:16:37.639 } 00:16:37.639 } 00:16:37.639 ]' 00:16:37.639 18:08:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.639 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.639 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.899 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.899 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.899 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.899 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.899 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.899 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:16:38.837 18:08:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.837 18:08:39 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:38.837 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.837 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.837 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.838 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.097 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.356 18:08:39 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.356 { 00:16:39.356 "cntlid": 17, 00:16:39.356 "qid": 0, 00:16:39.356 "state": "enabled", 00:16:39.356 "thread": "nvmf_tgt_poll_group_000", 00:16:39.356 "listen_address": { 00:16:39.356 "trtype": "RDMA", 00:16:39.356 "adrfam": "IPv4", 00:16:39.356 "traddr": "192.168.100.8", 00:16:39.356 "trsvcid": "4420" 00:16:39.356 }, 00:16:39.356 "peer_address": { 00:16:39.356 "trtype": "RDMA", 00:16:39.356 "adrfam": "IPv4", 00:16:39.356 "traddr": "192.168.100.8", 00:16:39.356 "trsvcid": "50518" 00:16:39.356 }, 00:16:39.356 "auth": { 00:16:39.356 "state": "completed", 00:16:39.356 "digest": "sha256", 00:16:39.356 "dhgroup": "ffdhe3072" 00:16:39.356 } 00:16:39.356 } 00:16:39.356 ]' 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.356 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.614 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.614 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.614 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.614 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.614 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.614 18:08:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:16:40.245 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.526 18:08:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.785 00:16:40.785 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.785 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.785 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.043 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.044 { 00:16:41.044 "cntlid": 19, 00:16:41.044 "qid": 0, 00:16:41.044 "state": "enabled", 00:16:41.044 "thread": "nvmf_tgt_poll_group_000", 00:16:41.044 "listen_address": { 00:16:41.044 "trtype": "RDMA", 00:16:41.044 "adrfam": "IPv4", 00:16:41.044 "traddr": "192.168.100.8", 00:16:41.044 "trsvcid": "4420" 00:16:41.044 }, 00:16:41.044 "peer_address": { 00:16:41.044 "trtype": "RDMA", 00:16:41.044 "adrfam": "IPv4", 00:16:41.044 "traddr": "192.168.100.8", 00:16:41.044 "trsvcid": "45679" 00:16:41.044 }, 00:16:41.044 "auth": { 
00:16:41.044 "state": "completed", 00:16:41.044 "digest": "sha256", 00:16:41.044 "dhgroup": "ffdhe3072" 00:16:41.044 } 00:16:41.044 } 00:16:41.044 ]' 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.044 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.302 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.302 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.302 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.302 18:08:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:16:41.869 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.127 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.386 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.644 00:16:42.644 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.644 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.644 18:08:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.644 { 00:16:42.644 "cntlid": 21, 00:16:42.644 "qid": 0, 00:16:42.644 "state": "enabled", 00:16:42.644 "thread": "nvmf_tgt_poll_group_000", 00:16:42.644 "listen_address": { 00:16:42.644 "trtype": "RDMA", 00:16:42.644 "adrfam": "IPv4", 00:16:42.644 "traddr": "192.168.100.8", 00:16:42.644 "trsvcid": "4420" 00:16:42.644 }, 00:16:42.644 "peer_address": { 00:16:42.644 "trtype": "RDMA", 00:16:42.644 "adrfam": "IPv4", 00:16:42.644 "traddr": "192.168.100.8", 00:16:42.644 "trsvcid": "45755" 00:16:42.644 }, 00:16:42.644 "auth": { 00:16:42.644 "state": "completed", 00:16:42.644 "digest": "sha256", 00:16:42.644 "dhgroup": "ffdhe3072" 00:16:42.644 } 00:16:42.644 } 00:16:42.644 ]' 00:16:42.644 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.902 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.160 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:16:43.725 18:08:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.726 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.984 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.243 00:16:44.243 18:08:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.243 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.243 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.502 { 00:16:44.502 "cntlid": 23, 00:16:44.502 "qid": 0, 00:16:44.502 "state": "enabled", 00:16:44.502 "thread": "nvmf_tgt_poll_group_000", 00:16:44.502 "listen_address": { 00:16:44.502 "trtype": "RDMA", 00:16:44.502 "adrfam": "IPv4", 00:16:44.502 "traddr": "192.168.100.8", 00:16:44.502 "trsvcid": "4420" 00:16:44.502 }, 00:16:44.502 "peer_address": { 00:16:44.502 "trtype": "RDMA", 00:16:44.502 "adrfam": "IPv4", 00:16:44.502 "traddr": "192.168.100.8", 00:16:44.502 "trsvcid": "54830" 00:16:44.502 }, 00:16:44.502 "auth": { 00:16:44.502 "state": "completed", 00:16:44.502 "digest": "sha256", 00:16:44.502 "dhgroup": "ffdhe3072" 00:16:44.502 } 00:16:44.502 } 00:16:44.502 ]' 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.502 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.761 18:08:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:16:45.328 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.328 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:45.328 18:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.328 18:08:45 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.328 18:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.587 18:08:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.846 00:16:45.846 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.846 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.846 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.104 18:08:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.104 { 00:16:46.104 "cntlid": 25, 00:16:46.104 "qid": 0, 00:16:46.104 "state": "enabled", 00:16:46.104 "thread": "nvmf_tgt_poll_group_000", 00:16:46.104 "listen_address": { 00:16:46.104 "trtype": "RDMA", 00:16:46.104 "adrfam": "IPv4", 00:16:46.104 "traddr": "192.168.100.8", 00:16:46.104 "trsvcid": "4420" 00:16:46.104 }, 00:16:46.104 "peer_address": { 00:16:46.104 "trtype": "RDMA", 00:16:46.104 "adrfam": "IPv4", 00:16:46.104 "traddr": "192.168.100.8", 00:16:46.104 "trsvcid": "38918" 00:16:46.104 }, 00:16:46.104 "auth": { 00:16:46.104 "state": "completed", 00:16:46.104 "digest": "sha256", 00:16:46.104 "dhgroup": "ffdhe4096" 00:16:46.104 } 00:16:46.104 } 00:16:46.104 ]' 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.104 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.364 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.364 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.364 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.364 18:08:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:16:46.932 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.191 18:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.450 18:08:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.450 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.450 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.709 00:16:47.710 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.710 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.710 18:08:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.710 { 00:16:47.710 "cntlid": 27, 00:16:47.710 "qid": 0, 00:16:47.710 "state": "enabled", 00:16:47.710 "thread": "nvmf_tgt_poll_group_000", 00:16:47.710 "listen_address": { 00:16:47.710 "trtype": "RDMA", 00:16:47.710 "adrfam": "IPv4", 00:16:47.710 "traddr": "192.168.100.8", 00:16:47.710 "trsvcid": "4420" 00:16:47.710 }, 00:16:47.710 "peer_address": { 00:16:47.710 "trtype": "RDMA", 00:16:47.710 "adrfam": "IPv4", 00:16:47.710 "traddr": "192.168.100.8", 00:16:47.710 "trsvcid": "57060" 00:16:47.710 }, 00:16:47.710 "auth": { 00:16:47.710 "state": "completed", 00:16:47.710 "digest": "sha256", 00:16:47.710 "dhgroup": "ffdhe4096" 00:16:47.710 } 00:16:47.710 } 00:16:47.710 ]' 00:16:47.710 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.969 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.228 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:16:48.796 18:08:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.796 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.055 18:08:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.055 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.314 00:16:49.314 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.314 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.314 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.572 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.572 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.572 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.572 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.572 18:08:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.572 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.572 { 00:16:49.572 "cntlid": 29, 00:16:49.572 "qid": 0, 00:16:49.572 "state": "enabled", 00:16:49.572 "thread": "nvmf_tgt_poll_group_000", 00:16:49.572 "listen_address": { 00:16:49.572 "trtype": "RDMA", 00:16:49.572 "adrfam": "IPv4", 00:16:49.572 "traddr": "192.168.100.8", 00:16:49.572 "trsvcid": "4420" 00:16:49.572 }, 00:16:49.572 "peer_address": { 00:16:49.572 "trtype": "RDMA", 00:16:49.573 "adrfam": "IPv4", 00:16:49.573 "traddr": "192.168.100.8", 00:16:49.573 "trsvcid": "36985" 00:16:49.573 }, 00:16:49.573 "auth": { 00:16:49.573 "state": "completed", 00:16:49.573 "digest": "sha256", 00:16:49.573 "dhgroup": "ffdhe4096" 00:16:49.573 } 00:16:49.573 } 00:16:49.573 ]' 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.573 18:08:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.831 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:16:50.398 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.398 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:50.398 18:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.398 18:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.398 18:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.399 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.399 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.399 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.657 18:08:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.916 00:16:50.916 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.916 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.916 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.174 { 00:16:51.174 "cntlid": 31, 00:16:51.174 "qid": 0, 00:16:51.174 "state": "enabled", 00:16:51.174 "thread": "nvmf_tgt_poll_group_000", 00:16:51.174 "listen_address": { 00:16:51.174 "trtype": "RDMA", 00:16:51.174 "adrfam": "IPv4", 00:16:51.174 "traddr": "192.168.100.8", 00:16:51.174 "trsvcid": "4420" 00:16:51.174 }, 00:16:51.174 "peer_address": { 00:16:51.174 "trtype": "RDMA", 00:16:51.174 "adrfam": "IPv4", 00:16:51.174 "traddr": "192.168.100.8", 00:16:51.174 "trsvcid": "53642" 00:16:51.174 }, 00:16:51.174 "auth": { 00:16:51.174 "state": "completed", 00:16:51.174 "digest": "sha256", 00:16:51.174 "dhgroup": "ffdhe4096" 00:16:51.174 } 00:16:51.174 } 00:16:51.174 ]' 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.174 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.433 18:08:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.998 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.256 18:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.257 18:08:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.257 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.257 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.515 00:16:52.773 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.773 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.773 18:08:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.773 { 00:16:52.773 "cntlid": 33, 00:16:52.773 "qid": 0, 00:16:52.773 "state": "enabled", 00:16:52.773 "thread": "nvmf_tgt_poll_group_000", 00:16:52.773 "listen_address": { 00:16:52.773 "trtype": "RDMA", 00:16:52.773 "adrfam": "IPv4", 00:16:52.773 "traddr": "192.168.100.8", 
00:16:52.773 "trsvcid": "4420" 00:16:52.773 }, 00:16:52.773 "peer_address": { 00:16:52.773 "trtype": "RDMA", 00:16:52.773 "adrfam": "IPv4", 00:16:52.773 "traddr": "192.168.100.8", 00:16:52.773 "trsvcid": "33656" 00:16:52.773 }, 00:16:52.773 "auth": { 00:16:52.773 "state": "completed", 00:16:52.773 "digest": "sha256", 00:16:52.773 "dhgroup": "ffdhe6144" 00:16:52.773 } 00:16:52.773 } 00:16:52.773 ]' 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.773 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.031 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.031 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.031 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.031 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.031 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.031 18:08:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:16:53.675 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.933 18:08:54 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.933 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.499 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.499 { 00:16:54.499 "cntlid": 35, 00:16:54.499 "qid": 0, 00:16:54.499 "state": "enabled", 00:16:54.499 "thread": "nvmf_tgt_poll_group_000", 00:16:54.499 "listen_address": { 00:16:54.499 "trtype": "RDMA", 00:16:54.499 "adrfam": "IPv4", 00:16:54.499 "traddr": "192.168.100.8", 00:16:54.499 "trsvcid": "4420" 00:16:54.499 }, 00:16:54.499 "peer_address": { 00:16:54.499 "trtype": "RDMA", 00:16:54.499 "adrfam": "IPv4", 00:16:54.499 "traddr": "192.168.100.8", 00:16:54.499 "trsvcid": "39459" 00:16:54.499 }, 00:16:54.499 "auth": { 00:16:54.499 "state": "completed", 00:16:54.499 "digest": "sha256", 00:16:54.499 "dhgroup": "ffdhe6144" 00:16:54.499 } 00:16:54.499 } 00:16:54.499 ]' 00:16:54.499 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
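For reference, each connect_authenticate cycle traced above (and repeated below for the remaining key/dhgroup combinations) reduces to roughly the following command sequence. This is a condensed sketch assembled from the xtrace output, not the literal auth.sh source: the RPC script path, socket, transport address, NQNs and key names are the ones this test uses, "rpc_cmd" stands in for the target-side RPC helper from the common test scripts, and $secret / $ctrl_secret are placeholders for the DHHC-1 strings shown in the trace.

  # host-side RPC wrapper ("hostrpc" in the trace above)
  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

  # 1. limit the host to the digest/dhgroup combination under test
  $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # 2. allow the host on the subsystem with the key pair under test (target-side RPC)
  rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. attach from the SPDK host stack and check that the qpair authenticated
  $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'    # expect "completed"
  $rpc bdev_nvme_detach_controller nvme0

  # 4. repeat the attach through the kernel initiator with the matching DHHC-1 secrets
  nvme connect -t rdma -a 192.168.100.8 -n $subnqn -i 1 -q $hostnqn \
      --hostid 8013ee90-59d8-e711-906e-00163566263e \
      --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
  nvme disconnect -n $subnqn

  # 5. clean up before the next key/dhgroup combination
  rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn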
00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.978 18:08:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.978 18:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.542 18:08:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.801 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.801 18:08:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.060 00:16:56.060 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.060 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.060 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.317 { 00:16:56.317 "cntlid": 37, 00:16:56.317 "qid": 0, 00:16:56.317 "state": "enabled", 00:16:56.317 "thread": "nvmf_tgt_poll_group_000", 00:16:56.317 "listen_address": { 00:16:56.317 "trtype": "RDMA", 00:16:56.317 "adrfam": "IPv4", 00:16:56.317 "traddr": "192.168.100.8", 00:16:56.317 "trsvcid": "4420" 00:16:56.317 }, 00:16:56.317 "peer_address": { 00:16:56.317 "trtype": "RDMA", 00:16:56.317 "adrfam": "IPv4", 00:16:56.317 "traddr": "192.168.100.8", 00:16:56.317 "trsvcid": "38397" 00:16:56.317 }, 00:16:56.317 "auth": { 00:16:56.317 "state": "completed", 00:16:56.317 "digest": "sha256", 00:16:56.317 "dhgroup": "ffdhe6144" 00:16:56.317 } 00:16:56.317 } 00:16:56.317 ]' 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.317 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.575 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.575 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.575 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.575 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.576 18:08:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:57.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.512 18:08:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.079 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.079 { 00:16:58.079 "cntlid": 39, 00:16:58.079 "qid": 0, 00:16:58.079 "state": "enabled", 00:16:58.079 "thread": "nvmf_tgt_poll_group_000", 00:16:58.079 "listen_address": { 00:16:58.079 "trtype": "RDMA", 00:16:58.079 "adrfam": "IPv4", 00:16:58.079 "traddr": "192.168.100.8", 00:16:58.079 "trsvcid": "4420" 00:16:58.079 }, 00:16:58.079 "peer_address": { 00:16:58.079 "trtype": "RDMA", 00:16:58.079 "adrfam": "IPv4", 00:16:58.079 "traddr": "192.168.100.8", 00:16:58.079 "trsvcid": "40545" 00:16:58.079 }, 00:16:58.079 "auth": { 00:16:58.079 "state": "completed", 00:16:58.079 "digest": "sha256", 00:16:58.079 "dhgroup": "ffdhe6144" 00:16:58.079 } 00:16:58.079 } 00:16:58.079 ]' 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.079 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.338 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.338 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.338 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.338 18:08:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:16:58.904 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.162 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.420 18:08:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.679 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.936 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.936 { 00:16:59.936 "cntlid": 41, 00:16:59.936 "qid": 0, 00:16:59.936 "state": "enabled", 00:16:59.936 "thread": "nvmf_tgt_poll_group_000", 00:16:59.936 "listen_address": { 00:16:59.936 "trtype": "RDMA", 00:16:59.936 "adrfam": "IPv4", 00:16:59.936 "traddr": "192.168.100.8", 00:16:59.936 "trsvcid": "4420" 00:16:59.936 }, 00:16:59.936 "peer_address": { 00:16:59.936 "trtype": "RDMA", 00:16:59.936 "adrfam": "IPv4", 00:16:59.936 "traddr": "192.168.100.8", 00:16:59.936 "trsvcid": "56759" 00:16:59.937 }, 00:16:59.937 "auth": { 00:16:59.937 "state": "completed", 00:16:59.937 "digest": "sha256", 
00:16:59.937 "dhgroup": "ffdhe8192" 00:16:59.937 } 00:16:59.937 } 00:16:59.937 ]' 00:16:59.937 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.937 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.937 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.195 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.195 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.195 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.195 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.195 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.195 18:09:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.130 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.695 00:17:01.695 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.695 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.695 18:09:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.954 { 00:17:01.954 "cntlid": 43, 00:17:01.954 "qid": 0, 00:17:01.954 "state": "enabled", 00:17:01.954 "thread": "nvmf_tgt_poll_group_000", 00:17:01.954 "listen_address": { 00:17:01.954 "trtype": "RDMA", 00:17:01.954 "adrfam": "IPv4", 00:17:01.954 "traddr": "192.168.100.8", 00:17:01.954 "trsvcid": "4420" 00:17:01.954 }, 00:17:01.954 "peer_address": { 00:17:01.954 "trtype": "RDMA", 00:17:01.954 "adrfam": "IPv4", 00:17:01.954 "traddr": "192.168.100.8", 00:17:01.954 "trsvcid": "32896" 00:17:01.954 }, 00:17:01.954 "auth": { 00:17:01.954 "state": "completed", 00:17:01.954 "digest": "sha256", 00:17:01.954 "dhgroup": "ffdhe8192" 00:17:01.954 } 00:17:01.954 } 00:17:01.954 ]' 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.954 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.213 18:09:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:02.779 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.049 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.622 00:17:03.622 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.622 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.622 18:09:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.880 { 00:17:03.880 "cntlid": 45, 00:17:03.880 "qid": 0, 00:17:03.880 "state": "enabled", 00:17:03.880 "thread": "nvmf_tgt_poll_group_000", 00:17:03.880 "listen_address": { 00:17:03.880 "trtype": "RDMA", 00:17:03.880 "adrfam": "IPv4", 00:17:03.880 "traddr": "192.168.100.8", 00:17:03.880 "trsvcid": "4420" 00:17:03.880 }, 00:17:03.880 "peer_address": { 00:17:03.880 "trtype": "RDMA", 00:17:03.880 "adrfam": "IPv4", 00:17:03.880 "traddr": "192.168.100.8", 00:17:03.880 "trsvcid": "42136" 00:17:03.880 }, 00:17:03.880 "auth": { 00:17:03.880 "state": "completed", 00:17:03.880 "digest": "sha256", 00:17:03.880 "dhgroup": "ffdhe8192" 00:17:03.880 } 00:17:03.880 } 00:17:03.880 ]' 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.880 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.138 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:04.705 18:09:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.705 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:04.705 18:09:05 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.705 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.705 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.705 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.705 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.705 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.964 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.531 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:05.531 { 00:17:05.531 "cntlid": 47, 00:17:05.531 "qid": 0, 00:17:05.531 "state": "enabled", 00:17:05.531 "thread": "nvmf_tgt_poll_group_000", 00:17:05.531 "listen_address": { 00:17:05.531 "trtype": "RDMA", 00:17:05.531 "adrfam": "IPv4", 00:17:05.531 "traddr": "192.168.100.8", 00:17:05.531 "trsvcid": "4420" 00:17:05.531 }, 00:17:05.531 "peer_address": { 00:17:05.531 "trtype": "RDMA", 00:17:05.531 "adrfam": "IPv4", 00:17:05.531 "traddr": "192.168.100.8", 00:17:05.531 "trsvcid": "54491" 00:17:05.531 }, 00:17:05.531 "auth": { 00:17:05.531 "state": "completed", 00:17:05.531 "digest": "sha256", 00:17:05.531 "dhgroup": "ffdhe8192" 00:17:05.531 } 00:17:05.531 } 00:17:05.531 ]' 00:17:05.531 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.789 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:05.789 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.789 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.789 18:09:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.789 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.789 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.789 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.048 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.614 18:09:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:06.872 18:09:07 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.872 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.159 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.159 18:09:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.160 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.160 { 00:17:07.160 "cntlid": 49, 00:17:07.160 "qid": 0, 00:17:07.160 "state": "enabled", 00:17:07.160 "thread": "nvmf_tgt_poll_group_000", 00:17:07.160 "listen_address": { 00:17:07.160 "trtype": "RDMA", 00:17:07.160 "adrfam": "IPv4", 00:17:07.160 "traddr": "192.168.100.8", 00:17:07.160 "trsvcid": "4420" 00:17:07.160 }, 00:17:07.160 "peer_address": { 00:17:07.160 "trtype": "RDMA", 00:17:07.160 "adrfam": "IPv4", 00:17:07.160 "traddr": "192.168.100.8", 00:17:07.160 "trsvcid": "53661" 00:17:07.160 }, 00:17:07.160 "auth": { 00:17:07.160 "state": "completed", 00:17:07.160 "digest": "sha384", 00:17:07.160 "dhgroup": "null" 00:17:07.160 } 00:17:07.160 } 00:17:07.160 ]' 00:17:07.160 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.419 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.677 18:09:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.245 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.503 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.761 00:17:08.761 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.761 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.761 18:09:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.020 { 00:17:09.020 "cntlid": 51, 00:17:09.020 "qid": 0, 00:17:09.020 "state": "enabled", 00:17:09.020 "thread": "nvmf_tgt_poll_group_000", 00:17:09.020 "listen_address": { 00:17:09.020 "trtype": "RDMA", 00:17:09.020 "adrfam": "IPv4", 00:17:09.020 "traddr": "192.168.100.8", 00:17:09.020 "trsvcid": "4420" 00:17:09.020 }, 00:17:09.020 "peer_address": { 00:17:09.020 "trtype": "RDMA", 00:17:09.020 "adrfam": "IPv4", 00:17:09.020 "traddr": "192.168.100.8", 00:17:09.020 "trsvcid": "49893" 00:17:09.020 }, 00:17:09.020 "auth": { 00:17:09.020 "state": "completed", 00:17:09.020 "digest": "sha384", 00:17:09.020 "dhgroup": "null" 00:17:09.020 } 00:17:09.020 } 00:17:09.020 ]' 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.020 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.279 18:09:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.846 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.104 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.362 00:17:10.362 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.362 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.362 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.620 { 00:17:10.620 "cntlid": 53, 00:17:10.620 "qid": 0, 00:17:10.620 "state": "enabled", 00:17:10.620 "thread": "nvmf_tgt_poll_group_000", 00:17:10.620 "listen_address": { 00:17:10.620 "trtype": "RDMA", 00:17:10.620 "adrfam": "IPv4", 00:17:10.620 "traddr": "192.168.100.8", 00:17:10.620 "trsvcid": "4420" 00:17:10.620 }, 00:17:10.620 "peer_address": { 00:17:10.620 "trtype": "RDMA", 00:17:10.620 "adrfam": "IPv4", 00:17:10.620 "traddr": "192.168.100.8", 00:17:10.620 "trsvcid": "33001" 00:17:10.620 }, 00:17:10.620 "auth": { 00:17:10.620 "state": "completed", 00:17:10.620 "digest": "sha384", 00:17:10.620 "dhgroup": "null" 00:17:10.620 } 00:17:10.620 } 00:17:10.620 ]' 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.620 18:09:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.878 18:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:11.444 18:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.444 18:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:11.444 18:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.444 18:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.444 18:09:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:17:11.702 18:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.702 18:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.702 18:09:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.702 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.960 00:17:11.960 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.960 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.960 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.218 { 00:17:12.218 "cntlid": 55, 00:17:12.218 "qid": 0, 00:17:12.218 "state": "enabled", 00:17:12.218 "thread": "nvmf_tgt_poll_group_000", 00:17:12.218 "listen_address": { 00:17:12.218 "trtype": "RDMA", 00:17:12.218 "adrfam": "IPv4", 00:17:12.218 "traddr": "192.168.100.8", 
00:17:12.218 "trsvcid": "4420" 00:17:12.218 }, 00:17:12.218 "peer_address": { 00:17:12.218 "trtype": "RDMA", 00:17:12.218 "adrfam": "IPv4", 00:17:12.218 "traddr": "192.168.100.8", 00:17:12.218 "trsvcid": "57042" 00:17:12.218 }, 00:17:12.218 "auth": { 00:17:12.218 "state": "completed", 00:17:12.218 "digest": "sha384", 00:17:12.218 "dhgroup": "null" 00:17:12.218 } 00:17:12.218 } 00:17:12.218 ]' 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.218 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.476 18:09:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:13.059 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.320 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.578 00:17:13.579 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.579 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.579 18:09:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.836 { 00:17:13.836 "cntlid": 57, 00:17:13.836 "qid": 0, 00:17:13.836 "state": "enabled", 00:17:13.836 "thread": "nvmf_tgt_poll_group_000", 00:17:13.836 "listen_address": { 00:17:13.836 "trtype": "RDMA", 00:17:13.836 "adrfam": "IPv4", 00:17:13.836 "traddr": "192.168.100.8", 00:17:13.836 "trsvcid": "4420" 00:17:13.836 }, 00:17:13.836 "peer_address": { 00:17:13.836 "trtype": "RDMA", 00:17:13.836 "adrfam": "IPv4", 00:17:13.836 "traddr": "192.168.100.8", 00:17:13.836 "trsvcid": "56216" 00:17:13.836 }, 00:17:13.836 "auth": { 00:17:13.836 "state": "completed", 00:17:13.836 "digest": "sha384", 00:17:13.836 "dhgroup": "ffdhe2048" 00:17:13.836 } 00:17:13.836 } 00:17:13.836 ]' 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.836 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.093 18:09:14 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.093 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.093 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.093 18:09:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:14.659 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.918 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.177 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.177 
18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.436 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.436 { 00:17:15.436 "cntlid": 59, 00:17:15.436 "qid": 0, 00:17:15.436 "state": "enabled", 00:17:15.436 "thread": "nvmf_tgt_poll_group_000", 00:17:15.436 "listen_address": { 00:17:15.436 "trtype": "RDMA", 00:17:15.436 "adrfam": "IPv4", 00:17:15.436 "traddr": "192.168.100.8", 00:17:15.436 "trsvcid": "4420" 00:17:15.436 }, 00:17:15.436 "peer_address": { 00:17:15.436 "trtype": "RDMA", 00:17:15.436 "adrfam": "IPv4", 00:17:15.436 "traddr": "192.168.100.8", 00:17:15.436 "trsvcid": "50700" 00:17:15.436 }, 00:17:15.436 "auth": { 00:17:15.436 "state": "completed", 00:17:15.436 "digest": "sha384", 00:17:15.436 "dhgroup": "ffdhe2048" 00:17:15.436 } 00:17:15.436 } 00:17:15.436 ]' 00:17:15.436 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.695 18:09:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.953 18:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:16.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.521 18:09:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.780 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.038 00:17:17.038 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.038 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.038 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.297 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.297 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:17.297 18:09:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.297 18:09:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.297 18:09:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.297 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.297 { 00:17:17.297 "cntlid": 61, 00:17:17.297 "qid": 0, 00:17:17.297 "state": "enabled", 00:17:17.297 "thread": "nvmf_tgt_poll_group_000", 00:17:17.297 "listen_address": { 00:17:17.297 "trtype": "RDMA", 00:17:17.297 "adrfam": "IPv4", 00:17:17.297 "traddr": "192.168.100.8", 00:17:17.298 "trsvcid": "4420" 00:17:17.298 }, 00:17:17.298 "peer_address": { 00:17:17.298 "trtype": "RDMA", 00:17:17.298 "adrfam": "IPv4", 00:17:17.298 "traddr": "192.168.100.8", 00:17:17.298 "trsvcid": "47050" 00:17:17.298 }, 00:17:17.298 "auth": { 00:17:17.298 "state": "completed", 00:17:17.298 "digest": "sha384", 00:17:17.298 "dhgroup": "ffdhe2048" 00:17:17.298 } 00:17:17.298 } 00:17:17.298 ]' 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.298 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.557 18:09:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.124 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.383 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.642 00:17:18.642 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.642 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.642 18:09:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.018 { 00:17:19.018 "cntlid": 63, 00:17:19.018 "qid": 0, 00:17:19.018 "state": "enabled", 00:17:19.018 "thread": "nvmf_tgt_poll_group_000", 00:17:19.018 "listen_address": { 00:17:19.018 "trtype": "RDMA", 00:17:19.018 "adrfam": "IPv4", 00:17:19.018 "traddr": "192.168.100.8", 00:17:19.018 "trsvcid": "4420" 00:17:19.018 }, 00:17:19.018 "peer_address": { 00:17:19.018 "trtype": "RDMA", 00:17:19.018 "adrfam": "IPv4", 00:17:19.018 "traddr": "192.168.100.8", 00:17:19.018 "trsvcid": "50588" 00:17:19.018 }, 00:17:19.018 "auth": { 00:17:19.018 "state": "completed", 00:17:19.018 "digest": "sha384", 
00:17:19.018 "dhgroup": "ffdhe2048" 00:17:19.018 } 00:17:19.018 } 00:17:19.018 ]' 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.018 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.276 18:09:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.840 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.098 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.355 00:17:20.355 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.355 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.355 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.613 { 00:17:20.613 "cntlid": 65, 00:17:20.613 "qid": 0, 00:17:20.613 "state": "enabled", 00:17:20.613 "thread": "nvmf_tgt_poll_group_000", 00:17:20.613 "listen_address": { 00:17:20.613 "trtype": "RDMA", 00:17:20.613 "adrfam": "IPv4", 00:17:20.613 "traddr": "192.168.100.8", 00:17:20.613 "trsvcid": "4420" 00:17:20.613 }, 00:17:20.613 "peer_address": { 00:17:20.613 "trtype": "RDMA", 00:17:20.613 "adrfam": "IPv4", 00:17:20.613 "traddr": "192.168.100.8", 00:17:20.613 "trsvcid": "39916" 00:17:20.613 }, 00:17:20.613 "auth": { 00:17:20.613 "state": "completed", 00:17:20.613 "digest": "sha384", 00:17:20.613 "dhgroup": "ffdhe3072" 00:17:20.613 } 00:17:20.613 } 00:17:20.613 ]' 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.613 18:09:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.871 18:09:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:21.436 18:09:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.695 18:09:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.695 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.954 00:17:21.954 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.954 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.954 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.215 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.215 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.215 18:09:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.216 18:09:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.216 18:09:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.216 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.216 { 00:17:22.216 "cntlid": 67, 00:17:22.216 "qid": 0, 00:17:22.216 "state": "enabled", 00:17:22.216 "thread": "nvmf_tgt_poll_group_000", 00:17:22.216 "listen_address": { 00:17:22.216 "trtype": "RDMA", 00:17:22.216 "adrfam": "IPv4", 00:17:22.216 "traddr": "192.168.100.8", 00:17:22.216 "trsvcid": "4420" 00:17:22.216 }, 00:17:22.216 "peer_address": { 00:17:22.216 "trtype": "RDMA", 00:17:22.216 "adrfam": "IPv4", 00:17:22.216 "traddr": "192.168.100.8", 00:17:22.216 "trsvcid": "60740" 00:17:22.216 }, 00:17:22.216 "auth": { 00:17:22.216 "state": "completed", 00:17:22.216 "digest": "sha384", 00:17:22.216 "dhgroup": "ffdhe3072" 00:17:22.216 } 00:17:22.216 } 00:17:22.216 ]' 00:17:22.216 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.216 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.216 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.474 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.474 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.474 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.474 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.474 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.474 18:09:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.409 18:09:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.668 00:17:23.668 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.668 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.668 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.927 { 00:17:23.927 "cntlid": 69, 00:17:23.927 "qid": 0, 00:17:23.927 "state": "enabled", 00:17:23.927 "thread": "nvmf_tgt_poll_group_000", 00:17:23.927 "listen_address": { 00:17:23.927 "trtype": "RDMA", 00:17:23.927 "adrfam": "IPv4", 00:17:23.927 "traddr": "192.168.100.8", 00:17:23.927 "trsvcid": "4420" 00:17:23.927 }, 00:17:23.927 "peer_address": { 00:17:23.927 "trtype": "RDMA", 00:17:23.927 "adrfam": "IPv4", 00:17:23.927 "traddr": "192.168.100.8", 00:17:23.927 "trsvcid": "55288" 00:17:23.927 }, 00:17:23.927 "auth": { 00:17:23.927 "state": "completed", 00:17:23.927 "digest": "sha384", 00:17:23.927 "dhgroup": "ffdhe3072" 00:17:23.927 } 00:17:23.927 } 00:17:23.927 ]' 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.927 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.185 18:09:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:24.752 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 
ffdhe3072 3 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.010 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.268 00:17:25.268 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.268 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.268 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.526 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.526 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.526 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.526 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.526 18:09:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.526 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.526 { 00:17:25.526 "cntlid": 71, 00:17:25.526 "qid": 0, 00:17:25.526 "state": "enabled", 00:17:25.526 "thread": "nvmf_tgt_poll_group_000", 00:17:25.526 "listen_address": { 00:17:25.526 "trtype": "RDMA", 00:17:25.527 "adrfam": "IPv4", 00:17:25.527 "traddr": "192.168.100.8", 00:17:25.527 "trsvcid": "4420" 00:17:25.527 }, 00:17:25.527 "peer_address": { 00:17:25.527 "trtype": "RDMA", 00:17:25.527 "adrfam": "IPv4", 00:17:25.527 "traddr": "192.168.100.8", 00:17:25.527 "trsvcid": "40782" 00:17:25.527 }, 00:17:25.527 "auth": { 00:17:25.527 "state": "completed", 00:17:25.527 "digest": "sha384", 00:17:25.527 "dhgroup": "ffdhe3072" 00:17:25.527 } 00:17:25.527 } 00:17:25.527 ]' 00:17:25.527 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.527 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:25.527 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.785 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.785 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.785 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.785 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.785 18:09:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.785 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:26.353 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.611 18:09:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.870 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.128 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.128 18:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.387 { 00:17:27.387 "cntlid": 73, 00:17:27.387 "qid": 0, 00:17:27.387 "state": "enabled", 00:17:27.387 "thread": "nvmf_tgt_poll_group_000", 00:17:27.387 "listen_address": { 00:17:27.387 "trtype": "RDMA", 00:17:27.387 "adrfam": "IPv4", 00:17:27.387 "traddr": "192.168.100.8", 00:17:27.387 "trsvcid": "4420" 00:17:27.387 }, 00:17:27.387 "peer_address": { 00:17:27.387 "trtype": "RDMA", 00:17:27.387 "adrfam": "IPv4", 00:17:27.387 "traddr": "192.168.100.8", 00:17:27.387 "trsvcid": "46475" 00:17:27.387 }, 00:17:27.387 "auth": { 00:17:27.387 "state": "completed", 00:17:27.387 "digest": "sha384", 00:17:27.387 "dhgroup": "ffdhe4096" 00:17:27.387 } 00:17:27.387 } 00:17:27.387 ]' 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.387 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.645 18:09:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.212 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.469 18:09:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.726 00:17:28.726 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.726 18:09:29 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.726 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.985 { 00:17:28.985 "cntlid": 75, 00:17:28.985 "qid": 0, 00:17:28.985 "state": "enabled", 00:17:28.985 "thread": "nvmf_tgt_poll_group_000", 00:17:28.985 "listen_address": { 00:17:28.985 "trtype": "RDMA", 00:17:28.985 "adrfam": "IPv4", 00:17:28.985 "traddr": "192.168.100.8", 00:17:28.985 "trsvcid": "4420" 00:17:28.985 }, 00:17:28.985 "peer_address": { 00:17:28.985 "trtype": "RDMA", 00:17:28.985 "adrfam": "IPv4", 00:17:28.985 "traddr": "192.168.100.8", 00:17:28.985 "trsvcid": "46819" 00:17:28.985 }, 00:17:28.985 "auth": { 00:17:28.985 "state": "completed", 00:17:28.985 "digest": "sha384", 00:17:28.985 "dhgroup": "ffdhe4096" 00:17:28.985 } 00:17:28.985 } 00:17:28.985 ]' 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.985 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.244 18:09:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:29.810 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.069 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.328 00:17:30.328 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.328 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.328 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.587 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.587 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.587 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.587 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.587 18:09:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.587 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.587 { 00:17:30.587 "cntlid": 77, 00:17:30.587 "qid": 0, 00:17:30.587 "state": "enabled", 00:17:30.587 "thread": "nvmf_tgt_poll_group_000", 
00:17:30.587 "listen_address": { 00:17:30.587 "trtype": "RDMA", 00:17:30.587 "adrfam": "IPv4", 00:17:30.587 "traddr": "192.168.100.8", 00:17:30.587 "trsvcid": "4420" 00:17:30.587 }, 00:17:30.587 "peer_address": { 00:17:30.587 "trtype": "RDMA", 00:17:30.587 "adrfam": "IPv4", 00:17:30.587 "traddr": "192.168.100.8", 00:17:30.587 "trsvcid": "44382" 00:17:30.587 }, 00:17:30.587 "auth": { 00:17:30.587 "state": "completed", 00:17:30.587 "digest": "sha384", 00:17:30.588 "dhgroup": "ffdhe4096" 00:17:30.588 } 00:17:30.588 } 00:17:30.588 ]' 00:17:30.588 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.588 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.588 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.588 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.588 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.847 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.847 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.847 18:09:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.847 18:09:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:31.416 18:09:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.675 18:09:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.934 18:09:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.934 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.192 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.192 { 00:17:32.192 "cntlid": 79, 00:17:32.192 "qid": 0, 00:17:32.192 "state": "enabled", 00:17:32.192 "thread": "nvmf_tgt_poll_group_000", 00:17:32.192 "listen_address": { 00:17:32.192 "trtype": "RDMA", 00:17:32.192 "adrfam": "IPv4", 00:17:32.192 "traddr": "192.168.100.8", 00:17:32.192 "trsvcid": "4420" 00:17:32.192 }, 00:17:32.192 "peer_address": { 00:17:32.192 "trtype": "RDMA", 00:17:32.192 "adrfam": "IPv4", 00:17:32.192 "traddr": "192.168.100.8", 00:17:32.192 "trsvcid": "51850" 00:17:32.192 }, 00:17:32.192 "auth": { 00:17:32.192 "state": "completed", 00:17:32.192 "digest": "sha384", 00:17:32.192 "dhgroup": "ffdhe4096" 00:17:32.192 } 00:17:32.192 } 00:17:32.192 ]' 00:17:32.192 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
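
The cycle repeated throughout this trace for every digest/DH-group/key combination is compact enough to summarize. The sketch below condenses it into plain shell using only the RPCs and flags visible in the log itself; the NQNs, the 192.168.100.8 RDMA address, the /var/tmp/host.sock socket, the host ID and the key index are placeholders copied from this particular run, the DHHC-1 secrets stand in for the strings shown in the nvme connect lines above, and the real loop (connect_authenticate in target/auth.sh) adds the output checks and error handling omitted here. Target-side calls go through rpc_cmd in the script; the default target socket is assumed below.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
hostid=8013ee90-59d8-e711-906e-00163566263e
keyid=3
hostkey='DHHC-1:<host secret from the trace>'      # placeholder
ctrlkey='DHHC-1:<controller secret from the trace>' # placeholder

# 1. Pin the host-side initiator to a single digest and DH group.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the target subsystem with the key under test
#    (plus an optional controller key for bidirectional authentication).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Attach a controller over RDMA with the same keys; this is the step
#    that actually performs the DH-HMAC-CHAP handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4. Confirm the qpair negotiated the expected digest and dhgroup and that
#    auth.state is "completed", then detach the controller again.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# 5. Repeat the same handshake through the kernel initiator by passing the
#    DHHC-1 secrets directly to nvme-cli, then clean up the subsystem host.
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$hostkey" --dhchap-ctrl-secret "$ctrlkey"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
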
00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.451 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.710 18:09:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.278 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:33.538 18:09:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.798 00:17:33.798 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.798 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.798 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.057 { 00:17:34.057 "cntlid": 81, 00:17:34.057 "qid": 0, 00:17:34.057 "state": "enabled", 00:17:34.057 "thread": "nvmf_tgt_poll_group_000", 00:17:34.057 "listen_address": { 00:17:34.057 "trtype": "RDMA", 00:17:34.057 "adrfam": "IPv4", 00:17:34.057 "traddr": "192.168.100.8", 00:17:34.057 "trsvcid": "4420" 00:17:34.057 }, 00:17:34.057 "peer_address": { 00:17:34.057 "trtype": "RDMA", 00:17:34.057 "adrfam": "IPv4", 00:17:34.057 "traddr": "192.168.100.8", 00:17:34.057 "trsvcid": "51870" 00:17:34.057 }, 00:17:34.057 "auth": { 00:17:34.057 "state": "completed", 00:17:34.057 "digest": "sha384", 00:17:34.057 "dhgroup": "ffdhe6144" 00:17:34.057 } 00:17:34.057 } 00:17:34.057 ]' 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.057 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.316 18:09:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:34.883 
18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.883 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.173 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:35.173 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.173 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.173 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.173 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.173 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.174 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.174 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.174 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.174 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.174 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.174 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.433 00:17:35.433 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.433 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.433 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.690 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.690 18:09:35 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.690 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.690 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.690 18:09:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.690 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.690 { 00:17:35.690 "cntlid": 83, 00:17:35.690 "qid": 0, 00:17:35.690 "state": "enabled", 00:17:35.690 "thread": "nvmf_tgt_poll_group_000", 00:17:35.690 "listen_address": { 00:17:35.690 "trtype": "RDMA", 00:17:35.690 "adrfam": "IPv4", 00:17:35.690 "traddr": "192.168.100.8", 00:17:35.690 "trsvcid": "4420" 00:17:35.690 }, 00:17:35.690 "peer_address": { 00:17:35.690 "trtype": "RDMA", 00:17:35.690 "adrfam": "IPv4", 00:17:35.690 "traddr": "192.168.100.8", 00:17:35.690 "trsvcid": "43878" 00:17:35.690 }, 00:17:35.690 "auth": { 00:17:35.690 "state": "completed", 00:17:35.690 "digest": "sha384", 00:17:35.690 "dhgroup": "ffdhe6144" 00:17:35.690 } 00:17:35.690 } 00:17:35.690 ]' 00:17:35.690 18:09:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.690 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.948 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:36.515 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.774 18:09:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:36.774 18:09:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.774 18:09:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.774 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.774 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.774 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:36.774 18:09:37 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.033 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.291 00:17:37.291 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.291 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.291 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.572 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.573 { 00:17:37.573 "cntlid": 85, 00:17:37.573 "qid": 0, 00:17:37.573 "state": "enabled", 00:17:37.573 "thread": "nvmf_tgt_poll_group_000", 00:17:37.573 "listen_address": { 00:17:37.573 "trtype": "RDMA", 00:17:37.573 "adrfam": "IPv4", 00:17:37.573 "traddr": "192.168.100.8", 00:17:37.573 "trsvcid": "4420" 00:17:37.573 }, 00:17:37.573 "peer_address": { 00:17:37.573 "trtype": "RDMA", 00:17:37.573 "adrfam": "IPv4", 00:17:37.573 "traddr": "192.168.100.8", 00:17:37.573 
"trsvcid": "45372" 00:17:37.573 }, 00:17:37.573 "auth": { 00:17:37.573 "state": "completed", 00:17:37.573 "digest": "sha384", 00:17:37.573 "dhgroup": "ffdhe6144" 00:17:37.573 } 00:17:37.573 } 00:17:37.573 ]' 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.573 18:09:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.837 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.421 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.680 18:09:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.939 00:17:38.939 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.939 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.939 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.198 { 00:17:39.198 "cntlid": 87, 00:17:39.198 "qid": 0, 00:17:39.198 "state": "enabled", 00:17:39.198 "thread": "nvmf_tgt_poll_group_000", 00:17:39.198 "listen_address": { 00:17:39.198 "trtype": "RDMA", 00:17:39.198 "adrfam": "IPv4", 00:17:39.198 "traddr": "192.168.100.8", 00:17:39.198 "trsvcid": "4420" 00:17:39.198 }, 00:17:39.198 "peer_address": { 00:17:39.198 "trtype": "RDMA", 00:17:39.198 "adrfam": "IPv4", 00:17:39.198 "traddr": "192.168.100.8", 00:17:39.198 "trsvcid": "58139" 00:17:39.198 }, 00:17:39.198 "auth": { 00:17:39.198 "state": "completed", 00:17:39.198 "digest": "sha384", 00:17:39.198 "dhgroup": "ffdhe6144" 00:17:39.198 } 00:17:39.198 } 00:17:39.198 ]' 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.198 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.456 18:09:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:40.024 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.282 18:09:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.283 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.283 18:09:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.849 00:17:40.849 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.849 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.849 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.106 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.106 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.106 18:09:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.106 18:09:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.106 18:09:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.106 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.106 { 00:17:41.106 "cntlid": 89, 00:17:41.106 "qid": 0, 00:17:41.106 "state": "enabled", 00:17:41.106 "thread": "nvmf_tgt_poll_group_000", 00:17:41.106 "listen_address": { 00:17:41.106 "trtype": "RDMA", 00:17:41.106 "adrfam": "IPv4", 00:17:41.106 "traddr": "192.168.100.8", 00:17:41.106 "trsvcid": "4420" 00:17:41.106 }, 00:17:41.106 "peer_address": { 00:17:41.106 "trtype": "RDMA", 00:17:41.106 "adrfam": "IPv4", 00:17:41.106 "traddr": "192.168.100.8", 00:17:41.106 "trsvcid": "54204" 00:17:41.106 }, 00:17:41.107 "auth": { 00:17:41.107 "state": "completed", 00:17:41.107 "digest": "sha384", 00:17:41.107 "dhgroup": "ffdhe8192" 00:17:41.107 } 00:17:41.107 } 00:17:41.107 ]' 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.107 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.364 18:09:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.931 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.190 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.757 00:17:42.757 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.757 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.757 18:09:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.016 { 00:17:43.016 "cntlid": 91, 00:17:43.016 "qid": 0, 00:17:43.016 "state": "enabled", 00:17:43.016 "thread": "nvmf_tgt_poll_group_000", 00:17:43.016 "listen_address": { 00:17:43.016 "trtype": "RDMA", 00:17:43.016 "adrfam": "IPv4", 00:17:43.016 "traddr": "192.168.100.8", 00:17:43.016 "trsvcid": "4420" 00:17:43.016 }, 00:17:43.016 "peer_address": { 00:17:43.016 "trtype": "RDMA", 00:17:43.016 "adrfam": "IPv4", 00:17:43.016 "traddr": "192.168.100.8", 00:17:43.016 "trsvcid": "56739" 00:17:43.016 }, 00:17:43.016 "auth": { 00:17:43.016 "state": "completed", 00:17:43.016 "digest": "sha384", 00:17:43.016 "dhgroup": "ffdhe8192" 00:17:43.016 } 00:17:43.016 } 00:17:43.016 ]' 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.016 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.275 18:09:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:43.841 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 2 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.100 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.667 00:17:44.667 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.667 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.667 18:09:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.925 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.925 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.925 18:09:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.925 18:09:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.925 18:09:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.925 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.925 { 00:17:44.925 "cntlid": 93, 00:17:44.925 "qid": 0, 00:17:44.925 "state": "enabled", 00:17:44.925 "thread": "nvmf_tgt_poll_group_000", 00:17:44.925 "listen_address": { 00:17:44.925 "trtype": "RDMA", 00:17:44.925 "adrfam": "IPv4", 00:17:44.925 "traddr": "192.168.100.8", 00:17:44.925 "trsvcid": "4420" 00:17:44.925 }, 00:17:44.925 "peer_address": { 00:17:44.925 "trtype": "RDMA", 00:17:44.925 "adrfam": "IPv4", 00:17:44.925 "traddr": "192.168.100.8", 00:17:44.925 "trsvcid": "43287" 00:17:44.925 }, 00:17:44.925 "auth": { 00:17:44.925 "state": "completed", 00:17:44.926 "digest": "sha384", 00:17:44.926 "dhgroup": "ffdhe8192" 00:17:44.926 } 00:17:44.926 } 00:17:44.926 ]' 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.926 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.184 18:09:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:45.749 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.008 18:09:46 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.008 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.576 00:17:46.576 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.576 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.576 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.833 18:09:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.833 { 00:17:46.833 "cntlid": 95, 00:17:46.833 "qid": 0, 00:17:46.833 "state": "enabled", 00:17:46.833 "thread": "nvmf_tgt_poll_group_000", 00:17:46.833 "listen_address": { 00:17:46.833 "trtype": "RDMA", 00:17:46.833 "adrfam": "IPv4", 00:17:46.833 "traddr": "192.168.100.8", 00:17:46.833 "trsvcid": "4420" 00:17:46.833 }, 00:17:46.833 "peer_address": { 00:17:46.833 "trtype": "RDMA", 00:17:46.833 "adrfam": "IPv4", 00:17:46.833 "traddr": "192.168.100.8", 00:17:46.833 "trsvcid": "37260" 00:17:46.833 }, 00:17:46.833 "auth": { 00:17:46.833 "state": "completed", 00:17:46.833 "digest": "sha384", 00:17:46.833 "dhgroup": "ffdhe8192" 00:17:46.833 } 00:17:46.833 } 00:17:46.833 ]' 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.833 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.091 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:47.657 18:09:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.657 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.915 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.174 00:17:48.174 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:17:48.174 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.174 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.437 { 00:17:48.437 "cntlid": 97, 00:17:48.437 "qid": 0, 00:17:48.437 "state": "enabled", 00:17:48.437 "thread": "nvmf_tgt_poll_group_000", 00:17:48.437 "listen_address": { 00:17:48.437 "trtype": "RDMA", 00:17:48.437 "adrfam": "IPv4", 00:17:48.437 "traddr": "192.168.100.8", 00:17:48.437 "trsvcid": "4420" 00:17:48.437 }, 00:17:48.437 "peer_address": { 00:17:48.437 "trtype": "RDMA", 00:17:48.437 "adrfam": "IPv4", 00:17:48.437 "traddr": "192.168.100.8", 00:17:48.437 "trsvcid": "39175" 00:17:48.437 }, 00:17:48.437 "auth": { 00:17:48.437 "state": "completed", 00:17:48.437 "digest": "sha512", 00:17:48.437 "dhgroup": "null" 00:17:48.437 } 00:17:48.437 } 00:17:48.437 ]' 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.437 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.753 18:09:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.321 18:09:49 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.321 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.580 18:09:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.838 00:17:49.838 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.838 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.838 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.097 { 00:17:50.097 "cntlid": 99, 
00:17:50.097 "qid": 0, 00:17:50.097 "state": "enabled", 00:17:50.097 "thread": "nvmf_tgt_poll_group_000", 00:17:50.097 "listen_address": { 00:17:50.097 "trtype": "RDMA", 00:17:50.097 "adrfam": "IPv4", 00:17:50.097 "traddr": "192.168.100.8", 00:17:50.097 "trsvcid": "4420" 00:17:50.097 }, 00:17:50.097 "peer_address": { 00:17:50.097 "trtype": "RDMA", 00:17:50.097 "adrfam": "IPv4", 00:17:50.097 "traddr": "192.168.100.8", 00:17:50.097 "trsvcid": "58545" 00:17:50.097 }, 00:17:50.097 "auth": { 00:17:50.097 "state": "completed", 00:17:50.097 "digest": "sha512", 00:17:50.097 "dhgroup": "null" 00:17:50.097 } 00:17:50.097 } 00:17:50.097 ]' 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.097 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.356 18:09:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:50.923 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=null 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.182 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.440 00:17:51.440 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.440 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.440 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.699 { 00:17:51.699 "cntlid": 101, 00:17:51.699 "qid": 0, 00:17:51.699 "state": "enabled", 00:17:51.699 "thread": "nvmf_tgt_poll_group_000", 00:17:51.699 "listen_address": { 00:17:51.699 "trtype": "RDMA", 00:17:51.699 "adrfam": "IPv4", 00:17:51.699 "traddr": "192.168.100.8", 00:17:51.699 "trsvcid": "4420" 00:17:51.699 }, 00:17:51.699 "peer_address": { 00:17:51.699 "trtype": "RDMA", 00:17:51.699 "adrfam": "IPv4", 00:17:51.699 "traddr": "192.168.100.8", 00:17:51.699 "trsvcid": "35741" 00:17:51.699 }, 00:17:51.699 "auth": { 00:17:51.699 "state": "completed", 00:17:51.699 "digest": "sha512", 00:17:51.699 "dhgroup": "null" 00:17:51.699 } 00:17:51.699 } 00:17:51.699 ]' 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.699 18:09:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.699 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:17:51.699 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.699 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.699 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.699 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.958 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:52.525 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.784 18:09:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:17:52.784 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.043 00:17:53.043 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.043 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.043 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.301 { 00:17:53.301 "cntlid": 103, 00:17:53.301 "qid": 0, 00:17:53.301 "state": "enabled", 00:17:53.301 "thread": "nvmf_tgt_poll_group_000", 00:17:53.301 "listen_address": { 00:17:53.301 "trtype": "RDMA", 00:17:53.301 "adrfam": "IPv4", 00:17:53.301 "traddr": "192.168.100.8", 00:17:53.301 "trsvcid": "4420" 00:17:53.301 }, 00:17:53.301 "peer_address": { 00:17:53.301 "trtype": "RDMA", 00:17:53.301 "adrfam": "IPv4", 00:17:53.301 "traddr": "192.168.100.8", 00:17:53.301 "trsvcid": "58971" 00:17:53.301 }, 00:17:53.301 "auth": { 00:17:53.301 "state": "completed", 00:17:53.301 "digest": "sha512", 00:17:53.301 "dhgroup": "null" 00:17:53.301 } 00:17:53.301 } 00:17:53.301 ]' 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:53.301 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.560 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.560 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.560 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.560 18:09:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:17:54.127 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.385 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.385 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.386 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.386 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.386 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.386 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.386 18:09:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.386 18:09:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.644 18:09:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.644 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.644 18:09:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.644 00:17:54.644 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.644 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.644 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.902 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.902 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.902 18:09:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.902 18:09:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.902 18:09:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.902 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.902 { 00:17:54.902 "cntlid": 105, 00:17:54.903 "qid": 0, 00:17:54.903 "state": "enabled", 00:17:54.903 "thread": "nvmf_tgt_poll_group_000", 00:17:54.903 "listen_address": { 00:17:54.903 "trtype": "RDMA", 00:17:54.903 "adrfam": "IPv4", 00:17:54.903 "traddr": "192.168.100.8", 00:17:54.903 "trsvcid": "4420" 00:17:54.903 }, 00:17:54.903 "peer_address": { 00:17:54.903 "trtype": "RDMA", 00:17:54.903 "adrfam": "IPv4", 00:17:54.903 "traddr": "192.168.100.8", 00:17:54.903 "trsvcid": "48872" 00:17:54.903 }, 00:17:54.903 "auth": { 00:17:54.903 "state": "completed", 00:17:54.903 "digest": "sha512", 00:17:54.903 "dhgroup": "ffdhe2048" 00:17:54.903 } 00:17:54.903 } 00:17:54.903 ]' 00:17:54.903 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.903 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.903 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.903 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.903 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.161 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.161 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.161 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.161 18:09:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:17:55.729 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.987 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:55.987 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.987 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.987 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.988 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.988 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.988 18:09:56 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:56.246 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:56.246 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.246 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.246 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.247 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.247 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.505 { 00:17:56.505 "cntlid": 107, 00:17:56.505 "qid": 0, 00:17:56.505 "state": "enabled", 00:17:56.505 "thread": "nvmf_tgt_poll_group_000", 00:17:56.505 "listen_address": { 00:17:56.505 "trtype": "RDMA", 00:17:56.505 "adrfam": "IPv4", 00:17:56.505 "traddr": "192.168.100.8", 00:17:56.505 "trsvcid": "4420" 00:17:56.505 }, 00:17:56.505 "peer_address": { 00:17:56.505 "trtype": "RDMA", 00:17:56.505 "adrfam": "IPv4", 00:17:56.505 "traddr": "192.168.100.8", 00:17:56.505 "trsvcid": "56818" 00:17:56.505 }, 
00:17:56.505 "auth": { 00:17:56.505 "state": "completed", 00:17:56.505 "digest": "sha512", 00:17:56.505 "dhgroup": "ffdhe2048" 00:17:56.505 } 00:17:56.505 } 00:17:56.505 ]' 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.505 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.764 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.764 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.764 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.764 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.764 18:09:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.764 18:09:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.700 18:09:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.700 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.958 00:17:57.959 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.959 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.959 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.217 { 00:17:58.217 "cntlid": 109, 00:17:58.217 "qid": 0, 00:17:58.217 "state": "enabled", 00:17:58.217 "thread": "nvmf_tgt_poll_group_000", 00:17:58.217 "listen_address": { 00:17:58.217 "trtype": "RDMA", 00:17:58.217 "adrfam": "IPv4", 00:17:58.217 "traddr": "192.168.100.8", 00:17:58.217 "trsvcid": "4420" 00:17:58.217 }, 00:17:58.217 "peer_address": { 00:17:58.217 "trtype": "RDMA", 00:17:58.217 "adrfam": "IPv4", 00:17:58.217 "traddr": "192.168.100.8", 00:17:58.217 "trsvcid": "46283" 00:17:58.217 }, 00:17:58.217 "auth": { 00:17:58.217 "state": "completed", 00:17:58.217 "digest": "sha512", 00:17:58.217 "dhgroup": "ffdhe2048" 00:17:58.217 } 00:17:58.217 } 00:17:58.217 ]' 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.217 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.476 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.476 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.476 
18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.476 18:09:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:17:59.045 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.304 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.564 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.564 00:17:59.823 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.823 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.823 18:09:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.823 { 00:17:59.823 "cntlid": 111, 00:17:59.823 "qid": 0, 00:17:59.823 "state": "enabled", 00:17:59.823 "thread": "nvmf_tgt_poll_group_000", 00:17:59.823 "listen_address": { 00:17:59.823 "trtype": "RDMA", 00:17:59.823 "adrfam": "IPv4", 00:17:59.823 "traddr": "192.168.100.8", 00:17:59.823 "trsvcid": "4420" 00:17:59.823 }, 00:17:59.823 "peer_address": { 00:17:59.823 "trtype": "RDMA", 00:17:59.823 "adrfam": "IPv4", 00:17:59.823 "traddr": "192.168.100.8", 00:17:59.823 "trsvcid": "33067" 00:17:59.823 }, 00:17:59.823 "auth": { 00:17:59.823 "state": "completed", 00:17:59.823 "digest": "sha512", 00:17:59.823 "dhgroup": "ffdhe2048" 00:17:59.823 } 00:17:59.823 } 00:17:59.823 ]' 00:17:59.823 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.082 18:10:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.018 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.277 00:18:01.277 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.277 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.277 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.535 { 00:18:01.535 "cntlid": 113, 00:18:01.535 "qid": 0, 00:18:01.535 "state": "enabled", 00:18:01.535 "thread": "nvmf_tgt_poll_group_000", 00:18:01.535 "listen_address": { 00:18:01.535 "trtype": "RDMA", 00:18:01.535 "adrfam": "IPv4", 00:18:01.535 "traddr": "192.168.100.8", 00:18:01.535 "trsvcid": "4420" 00:18:01.535 }, 00:18:01.535 "peer_address": { 00:18:01.535 "trtype": "RDMA", 00:18:01.535 "adrfam": "IPv4", 00:18:01.535 "traddr": "192.168.100.8", 00:18:01.535 "trsvcid": "33638" 00:18:01.535 }, 00:18:01.535 "auth": { 00:18:01.535 "state": "completed", 00:18:01.535 "digest": "sha512", 00:18:01.535 "dhgroup": "ffdhe3072" 00:18:01.535 } 00:18:01.535 } 00:18:01.535 ]' 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.535 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.793 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.793 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.793 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.793 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.793 18:10:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.793 18:10:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.730 18:10:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 
00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.730 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.988 00:18:02.988 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.988 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.988 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.247 { 00:18:03.247 "cntlid": 115, 00:18:03.247 "qid": 0, 00:18:03.247 "state": "enabled", 00:18:03.247 "thread": "nvmf_tgt_poll_group_000", 00:18:03.247 "listen_address": { 00:18:03.247 "trtype": "RDMA", 00:18:03.247 "adrfam": "IPv4", 00:18:03.247 "traddr": "192.168.100.8", 00:18:03.247 "trsvcid": "4420" 00:18:03.247 }, 00:18:03.247 "peer_address": { 00:18:03.247 "trtype": "RDMA", 00:18:03.247 "adrfam": "IPv4", 00:18:03.247 "traddr": "192.168.100.8", 00:18:03.247 "trsvcid": "58422" 00:18:03.247 }, 00:18:03.247 "auth": { 00:18:03.247 "state": "completed", 00:18:03.247 "digest": "sha512", 00:18:03.247 "dhgroup": "ffdhe3072" 00:18:03.247 } 00:18:03.247 } 00:18:03.247 ]' 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.247 18:10:03 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.247 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.546 18:10:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.151 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.412 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:04.412 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.412 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.412 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.413 18:10:04 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.413 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.672 00:18:04.672 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.672 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.672 18:10:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.930 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.930 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.930 18:10:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.930 18:10:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.930 18:10:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.930 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.930 { 00:18:04.930 "cntlid": 117, 00:18:04.930 "qid": 0, 00:18:04.930 "state": "enabled", 00:18:04.930 "thread": "nvmf_tgt_poll_group_000", 00:18:04.930 "listen_address": { 00:18:04.930 "trtype": "RDMA", 00:18:04.930 "adrfam": "IPv4", 00:18:04.930 "traddr": "192.168.100.8", 00:18:04.930 "trsvcid": "4420" 00:18:04.930 }, 00:18:04.930 "peer_address": { 00:18:04.930 "trtype": "RDMA", 00:18:04.930 "adrfam": "IPv4", 00:18:04.930 "traddr": "192.168.100.8", 00:18:04.930 "trsvcid": "54999" 00:18:04.930 }, 00:18:04.931 "auth": { 00:18:04.931 "state": "completed", 00:18:04.931 "digest": "sha512", 00:18:04.931 "dhgroup": "ffdhe3072" 00:18:04.931 } 00:18:04.931 } 00:18:04.931 ]' 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.931 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.189 18:10:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:18:05.755 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.014 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.015 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.273 00:18:06.273 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.273 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.273 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.531 { 00:18:06.531 "cntlid": 119, 00:18:06.531 "qid": 0, 00:18:06.531 "state": "enabled", 00:18:06.531 "thread": "nvmf_tgt_poll_group_000", 00:18:06.531 "listen_address": { 00:18:06.531 "trtype": "RDMA", 00:18:06.531 "adrfam": "IPv4", 00:18:06.531 "traddr": "192.168.100.8", 00:18:06.531 "trsvcid": "4420" 00:18:06.531 }, 00:18:06.531 "peer_address": { 00:18:06.531 "trtype": "RDMA", 00:18:06.531 "adrfam": "IPv4", 00:18:06.531 "traddr": "192.168.100.8", 00:18:06.531 "trsvcid": "59201" 00:18:06.531 }, 00:18:06.531 "auth": { 00:18:06.531 "state": "completed", 00:18:06.531 "digest": "sha512", 00:18:06.531 "dhgroup": "ffdhe3072" 00:18:06.531 } 00:18:06.531 } 00:18:06.531 ]' 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.531 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.789 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.789 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.789 18:10:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.789 18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:18:07.355 18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.613 
18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.613 18:10:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.871 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.128 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.128 { 00:18:08.128 "cntlid": 121, 00:18:08.128 "qid": 0, 00:18:08.128 "state": "enabled", 00:18:08.128 "thread": "nvmf_tgt_poll_group_000", 00:18:08.128 "listen_address": { 00:18:08.128 "trtype": "RDMA", 
00:18:08.128 "adrfam": "IPv4", 00:18:08.128 "traddr": "192.168.100.8", 00:18:08.128 "trsvcid": "4420" 00:18:08.128 }, 00:18:08.128 "peer_address": { 00:18:08.128 "trtype": "RDMA", 00:18:08.128 "adrfam": "IPv4", 00:18:08.128 "traddr": "192.168.100.8", 00:18:08.128 "trsvcid": "54271" 00:18:08.128 }, 00:18:08.128 "auth": { 00:18:08.128 "state": "completed", 00:18:08.128 "digest": "sha512", 00:18:08.128 "dhgroup": "ffdhe4096" 00:18:08.128 } 00:18:08.128 } 00:18:08.128 ]' 00:18:08.128 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.387 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.645 18:10:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.224 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key1 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.484 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.743 00:18:09.743 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.743 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.743 18:10:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.001 { 00:18:10.001 "cntlid": 123, 00:18:10.001 "qid": 0, 00:18:10.001 "state": "enabled", 00:18:10.001 "thread": "nvmf_tgt_poll_group_000", 00:18:10.001 "listen_address": { 00:18:10.001 "trtype": "RDMA", 00:18:10.001 "adrfam": "IPv4", 00:18:10.001 "traddr": "192.168.100.8", 00:18:10.001 "trsvcid": "4420" 00:18:10.001 }, 00:18:10.001 "peer_address": { 00:18:10.001 "trtype": "RDMA", 00:18:10.001 "adrfam": "IPv4", 00:18:10.001 "traddr": "192.168.100.8", 00:18:10.001 "trsvcid": "54269" 00:18:10.001 }, 00:18:10.001 "auth": { 00:18:10.001 "state": "completed", 00:18:10.001 "digest": "sha512", 00:18:10.001 "dhgroup": "ffdhe4096" 00:18:10.001 } 00:18:10.001 } 00:18:10.001 ]' 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.001 18:10:10 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.002 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.002 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.002 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.260 18:10:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.827 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:11.086 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.345 00:18:11.345 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.345 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.345 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.604 { 00:18:11.604 "cntlid": 125, 00:18:11.604 "qid": 0, 00:18:11.604 "state": "enabled", 00:18:11.604 "thread": "nvmf_tgt_poll_group_000", 00:18:11.604 "listen_address": { 00:18:11.604 "trtype": "RDMA", 00:18:11.604 "adrfam": "IPv4", 00:18:11.604 "traddr": "192.168.100.8", 00:18:11.604 "trsvcid": "4420" 00:18:11.604 }, 00:18:11.604 "peer_address": { 00:18:11.604 "trtype": "RDMA", 00:18:11.604 "adrfam": "IPv4", 00:18:11.604 "traddr": "192.168.100.8", 00:18:11.604 "trsvcid": "40388" 00:18:11.604 }, 00:18:11.604 "auth": { 00:18:11.604 "state": "completed", 00:18:11.604 "digest": "sha512", 00:18:11.604 "dhgroup": "ffdhe4096" 00:18:11.604 } 00:18:11.604 } 00:18:11.604 ]' 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.604 18:10:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.863 18:10:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:18:12.431 18:10:12 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.690 18:10:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.690 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.949 00:18:12.949 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.949 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.949 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.208 
18:10:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.208 { 00:18:13.208 "cntlid": 127, 00:18:13.208 "qid": 0, 00:18:13.208 "state": "enabled", 00:18:13.208 "thread": "nvmf_tgt_poll_group_000", 00:18:13.208 "listen_address": { 00:18:13.208 "trtype": "RDMA", 00:18:13.208 "adrfam": "IPv4", 00:18:13.208 "traddr": "192.168.100.8", 00:18:13.208 "trsvcid": "4420" 00:18:13.208 }, 00:18:13.208 "peer_address": { 00:18:13.208 "trtype": "RDMA", 00:18:13.208 "adrfam": "IPv4", 00:18:13.208 "traddr": "192.168.100.8", 00:18:13.208 "trsvcid": "40072" 00:18:13.208 }, 00:18:13.208 "auth": { 00:18:13.208 "state": "completed", 00:18:13.208 "digest": "sha512", 00:18:13.208 "dhgroup": "ffdhe4096" 00:18:13.208 } 00:18:13.208 } 00:18:13.208 ]' 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.208 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.467 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.467 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.467 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.467 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.467 18:10:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:18:14.035 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.292 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.552 18:10:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.811 00:18:14.811 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.811 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.811 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.069 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.069 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.069 18:10:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.069 18:10:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.069 18:10:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.069 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.069 { 00:18:15.069 "cntlid": 129, 00:18:15.069 "qid": 0, 00:18:15.069 "state": "enabled", 00:18:15.069 "thread": "nvmf_tgt_poll_group_000", 00:18:15.069 "listen_address": { 00:18:15.069 "trtype": "RDMA", 00:18:15.069 "adrfam": "IPv4", 00:18:15.069 "traddr": "192.168.100.8", 00:18:15.069 "trsvcid": "4420" 00:18:15.069 }, 00:18:15.069 "peer_address": { 00:18:15.069 "trtype": "RDMA", 00:18:15.069 "adrfam": "IPv4", 00:18:15.069 "traddr": "192.168.100.8", 00:18:15.069 "trsvcid": "34479" 00:18:15.069 }, 00:18:15.069 "auth": { 
00:18:15.069 "state": "completed", 00:18:15.069 "digest": "sha512", 00:18:15.069 "dhgroup": "ffdhe6144" 00:18:15.069 } 00:18:15.069 } 00:18:15.069 ]' 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.070 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.328 18:10:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.896 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.154 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.155 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.414 00:18:16.414 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.414 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.414 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.672 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.672 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.672 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.672 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 18:10:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.672 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.672 { 00:18:16.672 "cntlid": 131, 00:18:16.672 "qid": 0, 00:18:16.672 "state": "enabled", 00:18:16.672 "thread": "nvmf_tgt_poll_group_000", 00:18:16.672 "listen_address": { 00:18:16.672 "trtype": "RDMA", 00:18:16.672 "adrfam": "IPv4", 00:18:16.672 "traddr": "192.168.100.8", 00:18:16.672 "trsvcid": "4420" 00:18:16.672 }, 00:18:16.672 "peer_address": { 00:18:16.672 "trtype": "RDMA", 00:18:16.672 "adrfam": "IPv4", 00:18:16.672 "traddr": "192.168.100.8", 00:18:16.672 "trsvcid": "60116" 00:18:16.672 }, 00:18:16.673 "auth": { 00:18:16.673 "state": "completed", 00:18:16.673 "digest": "sha512", 00:18:16.673 "dhgroup": "ffdhe6144" 00:18:16.673 } 00:18:16.673 } 00:18:16.673 ]' 00:18:16.673 18:10:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.673 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.673 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.673 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.673 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.673 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.673 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.673 
18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.932 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:18:17.499 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.811 18:10:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.811 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.812 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.071 00:18:18.071 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.071 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.071 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.330 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.330 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.330 18:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.330 18:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.330 18:10:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.330 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.330 { 00:18:18.330 "cntlid": 133, 00:18:18.330 "qid": 0, 00:18:18.330 "state": "enabled", 00:18:18.330 "thread": "nvmf_tgt_poll_group_000", 00:18:18.330 "listen_address": { 00:18:18.330 "trtype": "RDMA", 00:18:18.330 "adrfam": "IPv4", 00:18:18.330 "traddr": "192.168.100.8", 00:18:18.330 "trsvcid": "4420" 00:18:18.330 }, 00:18:18.330 "peer_address": { 00:18:18.330 "trtype": "RDMA", 00:18:18.330 "adrfam": "IPv4", 00:18:18.330 "traddr": "192.168.100.8", 00:18:18.330 "trsvcid": "48740" 00:18:18.330 }, 00:18:18.330 "auth": { 00:18:18.330 "state": "completed", 00:18:18.331 "digest": "sha512", 00:18:18.331 "dhgroup": "ffdhe6144" 00:18:18.331 } 00:18:18.331 } 00:18:18.331 ]' 00:18:18.331 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.331 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.331 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.331 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.331 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.590 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.590 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.590 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.590 18:10:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:18:19.158 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.417 18:10:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.985 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.985 { 00:18:19.985 "cntlid": 135, 00:18:19.985 "qid": 0, 00:18:19.985 "state": "enabled", 00:18:19.985 "thread": "nvmf_tgt_poll_group_000", 00:18:19.985 "listen_address": { 00:18:19.985 "trtype": "RDMA", 00:18:19.985 "adrfam": "IPv4", 00:18:19.985 "traddr": "192.168.100.8", 00:18:19.985 "trsvcid": "4420" 00:18:19.985 }, 00:18:19.985 "peer_address": { 00:18:19.985 "trtype": "RDMA", 00:18:19.985 "adrfam": "IPv4", 00:18:19.985 "traddr": "192.168.100.8", 00:18:19.985 "trsvcid": "41768" 00:18:19.985 }, 00:18:19.985 "auth": { 00:18:19.985 "state": "completed", 00:18:19.985 "digest": "sha512", 00:18:19.985 "dhgroup": "ffdhe6144" 00:18:19.985 } 00:18:19.985 } 00:18:19.985 ]' 00:18:19.985 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.243 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.501 18:10:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.066 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 
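The trace above and below repeats the same connect_authenticate round once per DH group and key index. A minimal sketch of one such round, assembled only from commands that appear in this log (the helper definitions and the target-side rpc_cmd socket are assumptions; in the actual run they come from autotest_common.sh, and only the host socket /var/tmp/host.sock is visible in the trace):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme RPCs, as in the trace
rpc_cmd() { "$RPC" "$@"; }                         # target-side RPCs; default socket assumed

# Pin the host to a single digest/DH group combination for this round
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Allow the host NQN on the subsystem with the key pair under test
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller, forcing DH-HMAC-CHAP negotiation over RDMA
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm what the target actually negotiated on the new qpair
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect "sha512"
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect "ffdhe8192"

# Tear down before the next key index is exercised
hostrpc bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"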
00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.325 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.583 00:18:21.841 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.841 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.841 18:10:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.841 { 00:18:21.841 "cntlid": 137, 00:18:21.841 "qid": 0, 00:18:21.841 "state": "enabled", 00:18:21.841 "thread": "nvmf_tgt_poll_group_000", 00:18:21.841 "listen_address": { 00:18:21.841 "trtype": "RDMA", 00:18:21.841 "adrfam": "IPv4", 00:18:21.841 "traddr": "192.168.100.8", 00:18:21.841 "trsvcid": "4420" 00:18:21.841 }, 00:18:21.841 "peer_address": { 00:18:21.841 "trtype": "RDMA", 00:18:21.841 "adrfam": "IPv4", 00:18:21.841 "traddr": "192.168.100.8", 00:18:21.841 "trsvcid": "39978" 00:18:21.841 }, 00:18:21.841 "auth": { 00:18:21.841 "state": "completed", 00:18:21.841 "digest": "sha512", 00:18:21.841 "dhgroup": "ffdhe8192" 00:18:21.841 } 00:18:21.841 } 00:18:21.841 ]' 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.841 18:10:22 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.841 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.099 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.099 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.099 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.099 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.099 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.099 18:10:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
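Each round also exercises the kernel initiator: the nvme connect / nvme disconnect invocations that recur throughout this trace pass the DH-HMAC-CHAP secrets inline. A sketch of that leg, with the run-specific DHHC-1 strings shortened (the full values are the throwaway test keys shown in the trace; the controller secret is only given where the round configures a bidirectional key):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
HOSTID=8013ee90-59d8-e711-906e-00163566263e

# Connect through the kernel rdma transport with a single I/O queue (-i 1),
# authenticating with the host secret and, optionally, the controller secret
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "$HOSTNQN" --hostid "$HOSTID" \
     --dhchap-secret 'DHHC-1:00:...' \
     --dhchap-ctrl-secret 'DHHC-1:03:...'

# After the target-side qpair auth state has been verified, drop the connection
nvme disconnect -n nqn.2024-03.io.spdk:cnode0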
00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.032 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.621 00:18:23.621 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.621 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.621 18:10:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.621 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.879 { 00:18:23.879 "cntlid": 139, 00:18:23.879 "qid": 0, 00:18:23.879 "state": "enabled", 00:18:23.879 "thread": "nvmf_tgt_poll_group_000", 00:18:23.879 "listen_address": { 00:18:23.879 "trtype": "RDMA", 00:18:23.879 "adrfam": "IPv4", 00:18:23.879 "traddr": "192.168.100.8", 00:18:23.879 "trsvcid": "4420" 00:18:23.879 }, 00:18:23.879 "peer_address": { 00:18:23.879 "trtype": "RDMA", 00:18:23.879 "adrfam": "IPv4", 00:18:23.879 "traddr": "192.168.100.8", 00:18:23.879 "trsvcid": "35350" 00:18:23.879 }, 00:18:23.879 "auth": { 00:18:23.879 "state": "completed", 00:18:23.879 "digest": "sha512", 00:18:23.879 "dhgroup": "ffdhe8192" 00:18:23.879 } 00:18:23.879 } 00:18:23.879 ]' 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.879 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.880 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.880 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.880 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.880 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.880 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.138 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:01:YzA0Zjg3NDgxMjlmMTc2NjA4OGU5MmFiMGIyNTJjYTUDWcdE: --dhchap-ctrl-secret DHHC-1:02:MjJkNmRiOTY1MDlkNzM1N2JlOTEyNzI5NzE2NjBjNjFkNTMxMThkYjBiYjAyZDhml62Azw==: 00:18:24.705 18:10:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.705 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.963 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.530 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 
-- # jq -r '.[].name' 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.530 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.530 { 00:18:25.531 "cntlid": 141, 00:18:25.531 "qid": 0, 00:18:25.531 "state": "enabled", 00:18:25.531 "thread": "nvmf_tgt_poll_group_000", 00:18:25.531 "listen_address": { 00:18:25.531 "trtype": "RDMA", 00:18:25.531 "adrfam": "IPv4", 00:18:25.531 "traddr": "192.168.100.8", 00:18:25.531 "trsvcid": "4420" 00:18:25.531 }, 00:18:25.531 "peer_address": { 00:18:25.531 "trtype": "RDMA", 00:18:25.531 "adrfam": "IPv4", 00:18:25.531 "traddr": "192.168.100.8", 00:18:25.531 "trsvcid": "46053" 00:18:25.531 }, 00:18:25.531 "auth": { 00:18:25.531 "state": "completed", 00:18:25.531 "digest": "sha512", 00:18:25.531 "dhgroup": "ffdhe8192" 00:18:25.531 } 00:18:25.531 } 00:18:25.531 ]' 00:18:25.531 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.789 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.789 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.789 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.789 18:10:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.789 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.789 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.789 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.048 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NDIxOWIxNjcyNThhYzU0MTZhYTMxMGU4Njg2Mjk3NTkxYzE0MTE1NDgxMGQwNjBiIDiaPQ==: --dhchap-ctrl-secret DHHC-1:01:ODg2MzY0Y2I0ZjU4ODMwZWNkY2Q5ZTVhNjBlNzU3OTmqEDgv: 00:18:26.615 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.616 18:10:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.874 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.441 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.441 18:10:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.442 18:10:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.442 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.442 { 00:18:27.442 "cntlid": 143, 00:18:27.442 "qid": 0, 00:18:27.442 "state": "enabled", 00:18:27.442 "thread": "nvmf_tgt_poll_group_000", 00:18:27.442 "listen_address": { 00:18:27.442 "trtype": "RDMA", 00:18:27.442 
"adrfam": "IPv4", 00:18:27.442 "traddr": "192.168.100.8", 00:18:27.442 "trsvcid": "4420" 00:18:27.442 }, 00:18:27.442 "peer_address": { 00:18:27.442 "trtype": "RDMA", 00:18:27.442 "adrfam": "IPv4", 00:18:27.442 "traddr": "192.168.100.8", 00:18:27.442 "trsvcid": "47113" 00:18:27.442 }, 00:18:27.442 "auth": { 00:18:27.442 "state": "completed", 00:18:27.442 "digest": "sha512", 00:18:27.442 "dhgroup": "ffdhe8192" 00:18:27.442 } 00:18:27.442 } 00:18:27.442 ]' 00:18:27.442 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.442 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.442 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.700 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.700 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.700 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.700 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.700 18:10:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.959 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.528 18:10:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:28.788 18:10:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.788 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.356 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.356 { 00:18:29.356 "cntlid": 145, 00:18:29.356 "qid": 0, 00:18:29.356 "state": "enabled", 00:18:29.356 "thread": "nvmf_tgt_poll_group_000", 00:18:29.356 "listen_address": { 00:18:29.356 "trtype": "RDMA", 00:18:29.356 "adrfam": "IPv4", 00:18:29.356 "traddr": "192.168.100.8", 00:18:29.356 "trsvcid": "4420" 00:18:29.356 }, 00:18:29.356 "peer_address": { 00:18:29.356 "trtype": "RDMA", 00:18:29.356 "adrfam": "IPv4", 00:18:29.356 "traddr": "192.168.100.8", 00:18:29.356 "trsvcid": "39079" 00:18:29.356 }, 00:18:29.356 "auth": { 00:18:29.356 "state": "completed", 00:18:29.356 "digest": "sha512", 00:18:29.356 "dhgroup": "ffdhe8192" 00:18:29.356 } 00:18:29.356 } 00:18:29.356 ]' 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.356 18:10:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.356 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.616 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.616 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.616 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.616 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.616 18:10:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:00:ZGEwMmVhNTQyOGU4ZGY0OWU2YmIwYmI5MjIxOTMxYjhlNWNhNjAyYmNmZGFmYjk2vvs0YA==: --dhchap-ctrl-secret DHHC-1:03:OWJlMWNiZmI1ZGI0YWJjY2Y5NTM1NWI0NmU5ZDA0NTE4MTRmMjU2MTg4OTY2ZTFhY2Y4NTFjMmJmOGEyY2JmNix56tg=: 00:18:30.185 18:10:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:30.445 
18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:30.445 18:10:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:02.606 request: 00:19:02.606 { 00:19:02.606 "name": "nvme0", 00:19:02.606 "trtype": "rdma", 00:19:02.606 "traddr": "192.168.100.8", 00:19:02.606 "adrfam": "ipv4", 00:19:02.606 "trsvcid": "4420", 00:19:02.606 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:02.606 "prchk_reftag": false, 00:19:02.606 "prchk_guard": false, 00:19:02.606 "hdgst": false, 00:19:02.606 "ddgst": false, 00:19:02.606 "dhchap_key": "key2", 00:19:02.606 "method": "bdev_nvme_attach_controller", 00:19:02.606 "req_id": 1 00:19:02.606 } 00:19:02.606 Got JSON-RPC error response 00:19:02.606 response: 00:19:02.606 { 00:19:02.606 "code": -5, 00:19:02.606 "message": "Input/output error" 00:19:02.606 } 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:02.606 request: 00:19:02.606 { 00:19:02.606 "name": "nvme0", 00:19:02.606 "trtype": "rdma", 00:19:02.606 "traddr": "192.168.100.8", 00:19:02.606 "adrfam": "ipv4", 00:19:02.606 "trsvcid": "4420", 00:19:02.606 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:02.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:02.606 "prchk_reftag": false, 00:19:02.606 "prchk_guard": false, 00:19:02.606 "hdgst": false, 00:19:02.606 "ddgst": false, 00:19:02.606 "dhchap_key": "key1", 00:19:02.606 "dhchap_ctrlr_key": "ckey2", 00:19:02.606 "method": "bdev_nvme_attach_controller", 00:19:02.606 "req_id": 1 00:19:02.606 } 00:19:02.606 Got JSON-RPC error response 00:19:02.606 response: 00:19:02.606 { 00:19:02.606 "code": -5, 00:19:02.606 "message": "Input/output error" 00:19:02.606 } 00:19:02.606 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.607 18:11:01 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.607 18:11:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.687 request: 00:19:34.687 { 00:19:34.687 "name": "nvme0", 00:19:34.687 "trtype": "rdma", 00:19:34.687 "traddr": "192.168.100.8", 00:19:34.687 "adrfam": "ipv4", 00:19:34.687 "trsvcid": "4420", 00:19:34.687 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:34.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:19:34.687 "prchk_reftag": false, 00:19:34.687 "prchk_guard": false, 00:19:34.687 "hdgst": false, 00:19:34.687 "ddgst": false, 00:19:34.687 "dhchap_key": "key1", 00:19:34.687 "dhchap_ctrlr_key": "ckey1", 00:19:34.687 "method": "bdev_nvme_attach_controller", 00:19:34.687 "req_id": 1 00:19:34.687 } 00:19:34.687 Got JSON-RPC error response 00:19:34.687 response: 00:19:34.687 { 00:19:34.687 "code": -5, 00:19:34.687 "message": "Input/output error" 00:19:34.687 } 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1650998 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1650998 ']' 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1650998 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1650998 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1650998' 00:19:34.687 killing process with pid 1650998 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1650998 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1650998 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1684520 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1684520 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1684520 ']' 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
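[editor's note] The failed attach attempts above all follow one negative-test pattern: the target registers the host NQN with one DH-CHAP key pair, the host then attaches with a mismatched key or controller key, and the RPC fails with code -5 (Input/output error). A minimal sketch of that pattern, assuming the target RPC socket is /var/tmp/spdk.sock, the host-side socket is /var/tmp/host.sock, and key1/ckey1/ckey2 are already loaded as auth.sh does (paths shortened relative to the spdk checkout):

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  # Register the host with key1/ckey1 on the target side.
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Attaching with the wrong controller key (ckey2) is expected to fail with
  # "Input/output error", exactly as in the log above.
  if scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "attach unexpectedly succeeded" >&2; exit 1
  fi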
00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.687 18:11:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1684520 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1684520 ']' 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.687 18:11:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.687 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.687 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.687 { 00:19:34.687 "cntlid": 1, 00:19:34.687 "qid": 0, 00:19:34.687 "state": "enabled", 00:19:34.687 "thread": "nvmf_tgt_poll_group_000", 00:19:34.687 "listen_address": { 00:19:34.687 "trtype": "RDMA", 00:19:34.688 "adrfam": "IPv4", 00:19:34.688 "traddr": "192.168.100.8", 00:19:34.688 "trsvcid": "4420" 00:19:34.688 }, 00:19:34.688 "peer_address": { 00:19:34.688 "trtype": "RDMA", 00:19:34.688 "adrfam": "IPv4", 00:19:34.688 "traddr": "192.168.100.8", 00:19:34.688 "trsvcid": "36968" 00:19:34.688 }, 00:19:34.688 "auth": { 00:19:34.688 "state": "completed", 00:19:34.688 "digest": "sha512", 00:19:34.688 "dhgroup": "ffdhe8192" 00:19:34.688 } 00:19:34.688 } 00:19:34.688 ]' 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.688 18:11:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 
8013ee90-59d8-e711-906e-00163566263e --dhchap-secret DHHC-1:03:NjA2N2IzZjEwNTIyM2EyODljNDYzNjRjNmVjYjdiZmNkOGFlMmIxNmZkZGY5Y2RjYTk3MzZkMGIwNTRlNTFjNNzkYE4=: 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:35.259 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:35.518 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.518 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:35.518 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.519 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:35.519 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.519 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:35.519 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.519 18:11:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.519 18:11:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.591 request: 00:20:07.591 { 00:20:07.591 "name": "nvme0", 
00:20:07.591 "trtype": "rdma", 00:20:07.591 "traddr": "192.168.100.8", 00:20:07.591 "adrfam": "ipv4", 00:20:07.591 "trsvcid": "4420", 00:20:07.591 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:07.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:07.591 "prchk_reftag": false, 00:20:07.591 "prchk_guard": false, 00:20:07.591 "hdgst": false, 00:20:07.591 "ddgst": false, 00:20:07.591 "dhchap_key": "key3", 00:20:07.591 "method": "bdev_nvme_attach_controller", 00:20:07.591 "req_id": 1 00:20:07.591 } 00:20:07.591 Got JSON-RPC error response 00:20:07.591 response: 00:20:07.591 { 00:20:07.591 "code": -5, 00:20:07.591 "message": "Input/output error" 00:20:07.591 } 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:07.591 18:12:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.591 18:12:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.698 request: 00:20:39.698 { 00:20:39.698 "name": "nvme0", 
00:20:39.698 "trtype": "rdma", 00:20:39.698 "traddr": "192.168.100.8", 00:20:39.698 "adrfam": "ipv4", 00:20:39.698 "trsvcid": "4420", 00:20:39.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:39.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:39.699 "prchk_reftag": false, 00:20:39.699 "prchk_guard": false, 00:20:39.699 "hdgst": false, 00:20:39.699 "ddgst": false, 00:20:39.699 "dhchap_key": "key3", 00:20:39.699 "method": "bdev_nvme_attach_controller", 00:20:39.699 "req_id": 1 00:20:39.699 } 00:20:39.699 Got JSON-RPC error response 00:20:39.699 response: 00:20:39.699 { 00:20:39.699 "code": -5, 00:20:39.699 "message": "Input/output error" 00:20:39.699 } 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:39.699 request: 00:20:39.699 { 00:20:39.699 "name": "nvme0", 00:20:39.699 "trtype": "rdma", 00:20:39.699 "traddr": "192.168.100.8", 00:20:39.699 "adrfam": "ipv4", 00:20:39.699 "trsvcid": "4420", 00:20:39.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:39.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:39.699 "prchk_reftag": false, 00:20:39.699 "prchk_guard": false, 00:20:39.699 "hdgst": false, 00:20:39.699 "ddgst": false, 00:20:39.699 "dhchap_key": "key0", 00:20:39.699 "dhchap_ctrlr_key": "key1", 00:20:39.699 "method": "bdev_nvme_attach_controller", 00:20:39.699 "req_id": 1 00:20:39.699 } 00:20:39.699 Got JSON-RPC error response 00:20:39.699 response: 00:20:39.699 { 00:20:39.699 "code": -5, 00:20:39.699 "message": "Input/output error" 00:20:39.699 } 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:39.699 18:12:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:39.699 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.699 18:12:37 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1651032 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1651032 ']' 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1651032 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1651032 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1651032' 00:20:39.699 killing process with pid 1651032 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1651032 00:20:39.699 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1651032 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:39.700 rmmod nvme_rdma 00:20:39.700 rmmod nvme_fabrics 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1684520 ']' 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1684520 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1684520 ']' 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1684520 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
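[editor's note] The restricted-digest attempts just above (bdev_nvme_set_options --dhchap-digests sha256 before a key3 attach) fail the same way; once the full digest/dhgroup lists are restored, the key0 attach succeeds and auth.sh@195/@196 verify and tear down the host-side controller. A rough equivalent of that check, with socket path and controller name taken from the log:

  rpc_host='scripts/rpc.py -s /var/tmp/host.sock'
  # Confirm the controller attached with key0 is present, then detach it.
  name=$($rpc_host bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || { echo 'controller nvme0 not found' >&2; exit 1; }
  $rpc_host bdev_nvme_detach_controller nvme0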
00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1684520 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1684520' 00:20:39.700 killing process with pid 1684520 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1684520 00:20:39.700 18:12:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1684520 00:20:39.700 18:12:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:39.700 18:12:38 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:39.700 18:12:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XaX /tmp/spdk.key-sha256.TMP /tmp/spdk.key-sha384.RjY /tmp/spdk.key-sha512.zi8 /tmp/spdk.key-sha512.Z6M /tmp/spdk.key-sha384.xY8 /tmp/spdk.key-sha256.wpo '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:20:39.700 00:20:39.700 real 4m23.713s 00:20:39.700 user 9m22.635s 00:20:39.700 sys 0m24.499s 00:20:39.700 18:12:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:39.700 18:12:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.700 ************************************ 00:20:39.700 END TEST nvmf_auth_target 00:20:39.700 ************************************ 00:20:39.700 18:12:38 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:39.700 18:12:38 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:20:39.700 18:12:38 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:39.700 18:12:38 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:39.700 18:12:38 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:20:39.700 18:12:38 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:20:39.700 18:12:38 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:20:39.700 18:12:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:39.700 18:12:38 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.700 18:12:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:39.700 ************************************ 00:20:39.700 START TEST nvmf_srq_overwhelm 00:20:39.700 ************************************ 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:20:39.700 * Looking for test storage... 
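[editor's note] Before the srq_overwhelm banner above, the auth test's cleanup trap removes its throwaway DH-CHAP key files and unloads the fabrics modules; the random suffixes (.XaX, .TMP, .RjY, ...) are per-run temp-file names and will differ between runs. Condensed, that teardown is:

  rm -f /tmp/spdk.key-*            # temporary DH-CHAP key files from this run
  modprobe -v -r nvme-rdma         # prints the 'rmmod nvme_rdma' lines seen in the log
  modprobe -v -r nvme-fabrics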
00:20:39.700 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:20:39.700 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:20:39.701 18:12:38 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.271 18:12:45 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:46.271 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:46.271 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:46.271 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:46.271 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.271 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:46.272 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.272 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:46.272 altname enp217s0f0np0 00:20:46.272 altname ens818f0np0 00:20:46.272 inet 192.168.100.8/24 scope global mlx_0_0 00:20:46.272 valid_lft forever preferred_lft forever 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:46.272 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:46.272 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:46.272 altname enp217s0f1np1 00:20:46.272 altname ens818f1np1 00:20:46.272 inet 192.168.100.9/24 scope global mlx_0_1 00:20:46.272 valid_lft forever preferred_lft forever 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
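[editor's note] The discovery above found the two mlx5 ports (0x15b3:0x1015) as mlx_0_0/mlx_0_1 and left them with 192.168.100.8 and .9. A sketch of what load_ib_rdma_modules/allocate_nic_ips boil down to; the ip addr add lines are a simplification, since the real helper derives the addresses from NVMF_IP_PREFIX and NVMF_IP_LEAST_ADDR and assumes the CI scripts already renamed the ports:

  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  ip addr add 192.168.100.8/24 dev mlx_0_0   # NVMF_IP_PREFIX=192.168.100, least addr 8
  ip addr add 192.168.100.9/24 dev mlx_0_1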
00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:46.272 
192.168.100.9' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:46.272 192.168.100.9' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:46.272 192.168.100.9' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=1700115 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 1700115 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@829 -- # '[' -z 1700115 ']' 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.272 18:12:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:46.272 [2024-07-15 18:12:46.298669] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:20:46.272 [2024-07-15 18:12:46.298726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.272 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.272 [2024-07-15 18:12:46.382129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.272 [2024-07-15 18:12:46.457184] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.272 [2024-07-15 18:12:46.457223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.272 [2024-07-15 18:12:46.457232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.272 [2024-07-15 18:12:46.457241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.272 [2024-07-15 18:12:46.457248] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.272 [2024-07-15 18:12:46.457294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.272 [2024-07-15 18:12:46.457313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.272 [2024-07-15 18:12:46.457618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.272 [2024-07-15 18:12:46.457620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # return 0 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:46.840 [2024-07-15 18:12:47.194672] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x225af80/0x225f470) succeed. 00:20:46.840 [2024-07-15 18:12:47.203888] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x225c5c0/0x22a0b00) succeed. 
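From here the test loops over cnode0 through cnode5, and each pass repeats the same steps seen in the trace: create a subsystem, back it with a 64 MiB malloc bdev, attach the namespace, add an RDMA listener, then connect from the host with nvme-cli. Condensed into a sketch (rpc_cmd in the trace is the test wrapper around SPDK's scripts/rpc.py; the host NQN and host ID below are the ones used in this run):

# One pass of the srq_overwhelm setup loop, as traced for i=0..5.
for i in $(seq 0 5); do
    nqn=nqn.2016-06.io.spdk:cnode$i
    rpc.py nvmf_create_subsystem "$nqn" -a -s "SPDK$(printf '%014d' "$i")"
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    rpc.py nvmf_subsystem_add_ns "$nqn" "Malloc$i"
    rpc.py nvmf_subsystem_add_listener "$nqn" -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 -t rdma -n "$nqn" -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e
done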
00:20:46.840 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:47.099 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.100 Malloc0 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:47.100 [2024-07-15 18:12:47.301912] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.100 18:12:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.047 Malloc1 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.047 18:12:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.995 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.996 Malloc2 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.996 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:49.254 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.254 18:12:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:50.191 Malloc3 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.191 18:12:50 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.191 18:12:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.128 Malloc4 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.128 18:12:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:20:52.066 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:20:52.066 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:20:52.066 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:52.324 Malloc5 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.324 18:12:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1235 -- # local i=0 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:20:53.259 18:12:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:20:53.259 [global] 00:20:53.259 thread=1 00:20:53.259 invalidate=1 00:20:53.259 rw=read 00:20:53.259 time_based=1 00:20:53.259 runtime=10 00:20:53.259 ioengine=libaio 00:20:53.259 direct=1 00:20:53.259 bs=1048576 00:20:53.259 iodepth=128 00:20:53.259 norandommap=1 00:20:53.259 numjobs=13 00:20:53.259 00:20:53.259 [job0] 00:20:53.259 filename=/dev/nvme0n1 00:20:53.259 [job1] 00:20:53.259 filename=/dev/nvme1n1 00:20:53.259 [job2] 00:20:53.259 filename=/dev/nvme2n1 00:20:53.259 [job3] 00:20:53.259 filename=/dev/nvme3n1 00:20:53.259 [job4] 00:20:53.259 filename=/dev/nvme4n1 00:20:53.259 [job5] 00:20:53.259 filename=/dev/nvme5n1 00:20:53.596 Could not set queue depth (nvme0n1) 00:20:53.596 Could not set queue depth (nvme1n1) 00:20:53.596 Could not set queue depth (nvme2n1) 00:20:53.596 Could not set queue depth (nvme3n1) 00:20:53.596 Could not set queue depth (nvme4n1) 00:20:53.596 Could not set queue depth (nvme5n1) 00:20:53.856 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:53.856 ... 00:20:53.856 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:53.856 ... 00:20:53.856 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:53.856 ... 00:20:53.856 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:53.856 ... 00:20:53.856 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:53.856 ... 00:20:53.856 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:20:53.856 ... 
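The [global]/[jobN] lines just above are the job file generated by fio-wrapper, printed one option per log line. Reassembled as a sketch (the /tmp path below is only for illustration; the wrapper writes its own file), it also accounts for the thread count fio reports next: 6 jobs with numjobs=13 gives 78 threads.

# Reconstruction of the fio job file implied by the trace above.
cat > /tmp/srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1
EOF
fio /tmp/srq_overwhelm.fio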
00:20:53.856 fio-3.35 00:20:53.856 Starting 78 threads 00:21:06.069 00:21:06.069 job0: (groupid=0, jobs=1): err= 0: pid=1701691: Mon Jul 15 18:13:04 2024 00:21:06.069 read: IOPS=4, BW=4459KiB/s (4566kB/s)(46.0MiB/10563msec) 00:21:06.069 slat (usec): min=913, max=2085.8k, avg=228555.24, stdev=630796.39 00:21:06.069 clat (msec): min=48, max=10557, avg=8347.34, stdev=3099.79 00:21:06.069 lat (msec): min=2094, max=10562, avg=8575.90, stdev=2851.99 00:21:06.069 clat percentiles (msec): 00:21:06.069 | 1.00th=[ 49], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 6409], 00:21:06.069 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10402], 60.00th=[10537], 00:21:06.069 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:06.069 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.069 | 99.99th=[10537] 00:21:06.069 lat (msec) : 50=2.17%, >=2000=97.83% 00:21:06.069 cpu : usr=0.00%, sys=0.43%, ctx=89, majf=0, minf=11777 00:21:06.069 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:21:06.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.069 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.069 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.069 job0: (groupid=0, jobs=1): err= 0: pid=1701692: Mon Jul 15 18:13:04 2024 00:21:06.069 read: IOPS=35, BW=35.6MiB/s (37.4MB/s)(358MiB/10048msec) 00:21:06.069 slat (usec): min=497, max=2079.4k, avg=27962.83, stdev=111340.72 00:21:06.069 clat (msec): min=34, max=5168, avg=2657.49, stdev=1252.12 00:21:06.069 lat (msec): min=93, max=5209, avg=2685.45, stdev=1256.85 00:21:06.069 clat percentiles (msec): 00:21:06.069 | 1.00th=[ 103], 5.00th=[ 397], 10.00th=[ 902], 20.00th=[ 1485], 00:21:06.069 | 30.00th=[ 2333], 40.00th=[ 2500], 50.00th=[ 2735], 60.00th=[ 2903], 00:21:06.069 | 70.00th=[ 2970], 80.00th=[ 4329], 90.00th=[ 4463], 95.00th=[ 4799], 00:21:06.069 | 99.00th=[ 5067], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:21:06.069 | 99.99th=[ 5201] 00:21:06.069 bw ( KiB/s): min=28672, max=65536, per=1.20%, avg=47335.56, stdev=12618.08, samples=9 00:21:06.069 iops : min= 28, max= 64, avg=46.11, stdev=12.41, samples=9 00:21:06.069 lat (msec) : 50=0.28%, 100=0.56%, 250=2.79%, 500=1.96%, 750=2.51% 00:21:06.069 lat (msec) : 1000=2.51%, 2000=16.48%, >=2000=72.91% 00:21:06.069 cpu : usr=0.04%, sys=0.93%, ctx=1181, majf=0, minf=32769 00:21:06.069 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=8.9%, >=64=82.4% 00:21:06.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.069 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.069 issued rwts: total=358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.069 job0: (groupid=0, jobs=1): err= 0: pid=1701693: Mon Jul 15 18:13:04 2024 00:21:06.069 read: IOPS=35, BW=35.9MiB/s (37.6MB/s)(361MiB/10065msec) 00:21:06.069 slat (usec): min=539, max=2089.4k, avg=27701.00, stdev=111718.51 00:21:06.069 clat (msec): min=62, max=5780, avg=3236.75, stdev=1611.11 00:21:06.069 lat (msec): min=75, max=5813, avg=3264.46, stdev=1614.63 00:21:06.069 clat percentiles (msec): 00:21:06.069 | 1.00th=[ 111], 5.00th=[ 447], 10.00th=[ 785], 20.00th=[ 1636], 00:21:06.069 | 30.00th=[ 2299], 40.00th=[ 3071], 50.00th=[ 3306], 60.00th=[ 3708], 00:21:06.069 | 70.00th=[ 4396], 80.00th=[ 5000], 
90.00th=[ 5269], 95.00th=[ 5336], 00:21:06.069 | 99.00th=[ 5604], 99.50th=[ 5738], 99.90th=[ 5805], 99.95th=[ 5805], 00:21:06.069 | 99.99th=[ 5805] 00:21:06.069 bw ( KiB/s): min= 2048, max=57344, per=0.86%, avg=34221.79, stdev=15121.03, samples=14 00:21:06.069 iops : min= 2, max= 56, avg=33.29, stdev=14.80, samples=14 00:21:06.069 lat (msec) : 100=0.55%, 250=1.66%, 500=4.43%, 750=2.22%, 1000=5.54% 00:21:06.069 lat (msec) : 2000=13.57%, >=2000=72.02% 00:21:06.069 cpu : usr=0.02%, sys=1.32%, ctx=1225, majf=0, minf=32769 00:21:06.069 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.9%, >=64=82.5% 00:21:06.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.069 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.069 issued rwts: total=361,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.069 job0: (groupid=0, jobs=1): err= 0: pid=1701695: Mon Jul 15 18:13:04 2024 00:21:06.069 read: IOPS=2, BW=2246KiB/s (2300kB/s)(23.0MiB/10485msec) 00:21:06.069 slat (msec): min=4, max=2094, avg=453.48, stdev=847.62 00:21:06.069 clat (msec): min=54, max=10463, avg=5932.31, stdev=3175.56 00:21:06.069 lat (msec): min=2109, max=10484, avg=6385.79, stdev=3039.81 00:21:06.069 clat percentiles (msec): 00:21:06.069 | 1.00th=[ 55], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:06.069 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6409], 60.00th=[ 6477], 00:21:06.069 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402], 00:21:06.069 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.069 | 99.99th=[10402] 00:21:06.069 lat (msec) : 100=4.35%, >=2000=95.65% 00:21:06.069 cpu : usr=0.00%, sys=0.20%, ctx=65, majf=0, minf=5889 00:21:06.069 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:21:06.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.069 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:06.069 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.069 job0: (groupid=0, jobs=1): err= 0: pid=1701696: Mon Jul 15 18:13:04 2024 00:21:06.069 read: IOPS=55, BW=55.3MiB/s (58.0MB/s)(560MiB/10132msec) 00:21:06.069 slat (usec): min=54, max=2094.9k, avg=17871.98, stdev=152217.30 00:21:06.069 clat (msec): min=118, max=7170, avg=2229.95, stdev=2474.83 00:21:06.069 lat (msec): min=169, max=7173, avg=2247.83, stdev=2482.61 00:21:06.069 clat percentiles (msec): 00:21:06.069 | 1.00th=[ 186], 5.00th=[ 317], 10.00th=[ 535], 20.00th=[ 776], 00:21:06.069 | 30.00th=[ 818], 40.00th=[ 885], 50.00th=[ 961], 60.00th=[ 978], 00:21:06.069 | 70.00th=[ 1028], 80.00th=[ 5067], 90.00th=[ 7080], 95.00th=[ 7080], 00:21:06.069 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:21:06.069 | 99.99th=[ 7148] 00:21:06.069 bw ( KiB/s): min=14336, max=174080, per=2.24%, avg=88473.60, stdev=63680.76, samples=10 00:21:06.069 iops : min= 14, max= 170, avg=86.40, stdev=62.19, samples=10 00:21:06.069 lat (msec) : 250=3.04%, 500=5.54%, 750=6.07%, 1000=49.46%, 2000=8.57% 00:21:06.069 lat (msec) : >=2000=27.32% 00:21:06.069 cpu : usr=0.02%, sys=1.92%, ctx=524, majf=0, minf=32154 00:21:06.069 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8% 00:21:06.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.069 complete : 0=0.0%, 
4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.069 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.069 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.069 job0: (groupid=0, jobs=1): err= 0: pid=1701697: Mon Jul 15 18:13:04 2024 00:21:06.069 read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(307MiB/10137msec) 00:21:06.069 slat (usec): min=495, max=1976.3k, avg=32723.84, stdev=148431.88 00:21:06.069 clat (msec): min=88, max=6308, avg=2688.64, stdev=1431.18 00:21:06.069 lat (msec): min=155, max=6348, avg=2721.36, stdev=1441.25 00:21:06.069 clat percentiles (msec): 00:21:06.069 | 1.00th=[ 245], 5.00th=[ 642], 10.00th=[ 978], 20.00th=[ 1770], 00:21:06.069 | 30.00th=[ 2400], 40.00th=[ 2567], 50.00th=[ 2601], 60.00th=[ 2635], 00:21:06.069 | 70.00th=[ 2735], 80.00th=[ 2869], 90.00th=[ 6007], 95.00th=[ 6208], 00:21:06.069 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:21:06.069 | 99.99th=[ 6342] 00:21:06.069 bw ( KiB/s): min=28672, max=57344, per=1.16%, avg=45824.88, stdev=10729.01, samples=8 00:21:06.070 iops : min= 28, max= 56, avg=44.62, stdev=10.41, samples=8 00:21:06.070 lat (msec) : 100=0.33%, 250=0.98%, 500=2.93%, 750=2.93%, 1000=3.58% 00:21:06.070 lat (msec) : 2000=14.01%, >=2000=75.24% 00:21:06.070 cpu : usr=0.00%, sys=1.03%, ctx=978, majf=0, minf=32769 00:21:06.070 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:21:06.070 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701698: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=40, BW=41.0MiB/s (42.9MB/s)(413MiB/10084msec) 00:21:06.070 slat (usec): min=552, max=873764, avg=24324.91, stdev=47601.58 00:21:06.070 clat (msec): min=35, max=5950, avg=2736.86, stdev=1085.18 00:21:06.070 lat (msec): min=94, max=5976, avg=2761.19, stdev=1086.50 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 108], 5.00th=[ 498], 10.00th=[ 911], 20.00th=[ 1804], 00:21:06.070 | 30.00th=[ 2567], 40.00th=[ 2802], 50.00th=[ 2869], 60.00th=[ 3037], 00:21:06.070 | 70.00th=[ 3473], 80.00th=[ 3608], 90.00th=[ 3876], 95.00th=[ 4044], 00:21:06.070 | 99.00th=[ 4245], 99.50th=[ 5940], 99.90th=[ 5940], 99.95th=[ 5940], 00:21:06.070 | 99.99th=[ 5940] 00:21:06.070 bw ( KiB/s): min=22483, max=57229, per=1.05%, avg=41592.29, stdev=10552.32, samples=14 00:21:06.070 iops : min= 21, max= 55, avg=40.43, stdev=10.33, samples=14 00:21:06.070 lat (msec) : 50=0.24%, 100=0.48%, 250=1.94%, 500=2.42%, 750=2.91% 00:21:06.070 lat (msec) : 1000=2.66%, 2000=11.38%, >=2000=77.97% 00:21:06.070 cpu : usr=0.09%, sys=1.17%, ctx=1377, majf=0, minf=32769 00:21:06.070 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.7% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.070 issued rwts: total=413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701699: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=79, BW=79.9MiB/s (83.8MB/s)(805MiB/10075msec) 00:21:06.070 slat (usec): min=56, max=85405, avg=12431.41, stdev=16728.81 00:21:06.070 clat 
(msec): min=62, max=2144, avg=1464.24, stdev=448.18 00:21:06.070 lat (msec): min=85, max=2154, avg=1476.67, stdev=447.37 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 279], 5.00th=[ 659], 10.00th=[ 768], 20.00th=[ 1020], 00:21:06.070 | 30.00th=[ 1217], 40.00th=[ 1435], 50.00th=[ 1603], 60.00th=[ 1670], 00:21:06.070 | 70.00th=[ 1770], 80.00th=[ 1854], 90.00th=[ 1972], 95.00th=[ 2039], 00:21:06.070 | 99.00th=[ 2140], 99.50th=[ 2140], 99.90th=[ 2140], 99.95th=[ 2140], 00:21:06.070 | 99.99th=[ 2140] 00:21:06.070 bw ( KiB/s): min=32768, max=192512, per=2.06%, avg=81607.65, stdev=40528.83, samples=17 00:21:06.070 iops : min= 32, max= 188, avg=79.65, stdev=39.58, samples=17 00:21:06.070 lat (msec) : 100=0.37%, 250=0.50%, 500=0.87%, 750=7.33%, 1000=10.31% 00:21:06.070 lat (msec) : 2000=72.42%, >=2000=8.20% 00:21:06.070 cpu : usr=0.08%, sys=1.85%, ctx=1548, majf=0, minf=32769 00:21:06.070 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.070 issued rwts: total=805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701700: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=16, BW=16.1MiB/s (16.9MB/s)(163MiB/10103msec) 00:21:06.070 slat (usec): min=60, max=2110.8k, avg=61609.11, stdev=314651.73 00:21:06.070 clat (msec): min=59, max=9803, avg=2119.35, stdev=3108.59 00:21:06.070 lat (msec): min=132, max=9806, avg=2180.96, stdev=3163.60 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 133], 5.00th=[ 157], 10.00th=[ 241], 20.00th=[ 347], 00:21:06.070 | 30.00th=[ 456], 40.00th=[ 584], 50.00th=[ 718], 60.00th=[ 1053], 00:21:06.070 | 70.00th=[ 1418], 80.00th=[ 1569], 90.00th=[ 9731], 95.00th=[ 9731], 00:21:06.070 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:21:06.070 | 99.99th=[ 9866] 00:21:06.070 bw ( KiB/s): min=70274, max=70274, per=1.78%, avg=70274.00, stdev= 0.00, samples=1 00:21:06.070 iops : min= 68, max= 68, avg=68.00, stdev= 0.00, samples=1 00:21:06.070 lat (msec) : 100=0.61%, 250=11.66%, 500=23.93%, 750=16.56%, 1000=6.75% 00:21:06.070 lat (msec) : 2000=21.47%, >=2000=19.02% 00:21:06.070 cpu : usr=0.01%, sys=1.07%, ctx=273, majf=0, minf=32769 00:21:06.070 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.8%, 32=19.6%, >=64=61.3% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=97.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.7% 00:21:06.070 issued rwts: total=163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701701: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=15, BW=15.5MiB/s (16.2MB/s)(156MiB/10096msec) 00:21:06.070 slat (usec): min=478, max=2103.2k, avg=64117.87, stdev=321082.90 00:21:06.070 clat (msec): min=92, max=9991, avg=3673.15, stdev=3882.56 00:21:06.070 lat (msec): min=101, max=10000, avg=3737.27, stdev=3905.52 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 103], 5.00th=[ 292], 10.00th=[ 514], 20.00th=[ 776], 00:21:06.070 | 30.00th=[ 986], 40.00th=[ 1133], 50.00th=[ 1334], 60.00th=[ 1586], 00:21:06.070 | 70.00th=[ 5940], 80.00th=[ 9463], 90.00th=[ 9866], 95.00th=[10000], 00:21:06.070 | 99.00th=[10000], 99.50th=[10000], 
99.90th=[10000], 99.95th=[10000], 00:21:06.070 | 99.99th=[10000] 00:21:06.070 bw ( KiB/s): min=24576, max=32833, per=0.73%, avg=28704.50, stdev=5838.58, samples=2 00:21:06.070 iops : min= 24, max= 32, avg=28.00, stdev= 5.66, samples=2 00:21:06.070 lat (msec) : 100=0.64%, 250=3.21%, 500=5.13%, 750=7.69%, 1000=14.10% 00:21:06.070 lat (msec) : 2000=34.62%, >=2000=34.62% 00:21:06.070 cpu : usr=0.00%, sys=1.26%, ctx=292, majf=0, minf=32769 00:21:06.070 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.1%, 16=10.3%, 32=20.5%, >=64=59.6% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=96.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.3% 00:21:06.070 issued rwts: total=156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701702: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=21, BW=21.8MiB/s (22.9MB/s)(220MiB/10069msec) 00:21:06.070 slat (usec): min=901, max=2110.6k, avg=45482.73, stdev=241860.04 00:21:06.070 clat (msec): min=61, max=9244, avg=4744.11, stdev=3979.51 00:21:06.070 lat (msec): min=70, max=9254, avg=4789.59, stdev=3982.07 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 78], 5.00th=[ 157], 10.00th=[ 266], 20.00th=[ 514], 00:21:06.070 | 30.00th=[ 726], 40.00th=[ 1200], 50.00th=[ 5873], 60.00th=[ 8288], 00:21:06.070 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9194], 00:21:06.070 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:21:06.070 | 99.99th=[ 9194] 00:21:06.070 bw ( KiB/s): min=28672, max=100352, per=1.60%, avg=63488.00, stdev=35883.86, samples=3 00:21:06.070 iops : min= 28, max= 98, avg=62.00, stdev=35.04, samples=3 00:21:06.070 lat (msec) : 100=2.73%, 250=6.36%, 500=10.00%, 750=11.36%, 1000=5.00% 00:21:06.070 lat (msec) : 2000=12.27%, >=2000=52.27% 00:21:06.070 cpu : usr=0.00%, sys=1.09%, ctx=632, majf=0, minf=32769 00:21:06.070 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.3%, 32=14.5%, >=64=71.4% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:21:06.070 issued rwts: total=220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701703: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=39, BW=39.7MiB/s (41.6MB/s)(401MiB/10106msec) 00:21:06.070 slat (usec): min=424, max=1076.5k, avg=24962.96, stdev=58337.75 00:21:06.070 clat (msec): min=92, max=5508, avg=2962.56, stdev=1103.98 00:21:06.070 lat (msec): min=108, max=5579, avg=2987.53, stdev=1105.87 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 284], 5.00th=[ 1003], 10.00th=[ 1284], 20.00th=[ 2056], 00:21:06.070 | 30.00th=[ 2433], 40.00th=[ 2869], 50.00th=[ 3104], 60.00th=[ 3440], 00:21:06.070 | 70.00th=[ 3775], 80.00th=[ 4077], 90.00th=[ 4279], 95.00th=[ 4396], 00:21:06.070 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 5537], 99.95th=[ 5537], 00:21:06.070 | 99.99th=[ 5537] 00:21:06.070 bw ( KiB/s): min=14336, max=63488, per=0.95%, avg=37398.87, stdev=14185.27, samples=15 00:21:06.070 iops : min= 14, max= 62, avg=36.33, stdev=13.99, samples=15 00:21:06.070 lat (msec) : 100=0.25%, 250=0.75%, 500=1.00%, 750=1.75%, 1000=1.25% 00:21:06.070 lat (msec) : 2000=14.21%, >=2000=80.80% 00:21:06.070 cpu : usr=0.00%, sys=1.39%, ctx=1337, majf=0, minf=32769 00:21:06.070 
IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.070 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job0: (groupid=0, jobs=1): err= 0: pid=1701704: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=1, BW=1760KiB/s (1802kB/s)(18.0MiB/10473msec) 00:21:06.070 slat (usec): min=1339, max=2100.9k, avg=578871.82, stdev=926491.23 00:21:06.070 clat (msec): min=52, max=10470, avg=6115.66, stdev=3572.55 00:21:06.070 lat (msec): min=2094, max=10472, avg=6694.53, stdev=3370.79 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 53], 5.00th=[ 53], 10.00th=[ 2089], 20.00th=[ 2140], 00:21:06.070 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:21:06.070 | 70.00th=[ 8658], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537], 00:21:06.070 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.070 | 99.99th=[10537] 00:21:06.070 lat (msec) : 100=5.56%, >=2000=94.44% 00:21:06.070 cpu : usr=0.00%, sys=0.18%, ctx=64, majf=0, minf=4609 00:21:06.070 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:06.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.070 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:06.070 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.070 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.070 job1: (groupid=0, jobs=1): err= 0: pid=1701725: Mon Jul 15 18:13:04 2024 00:21:06.070 read: IOPS=2, BW=2051KiB/s (2101kB/s)(21.0MiB/10483msec) 00:21:06.070 slat (msec): min=6, max=2109, avg=496.65, stdev=881.17 00:21:06.070 clat (msec): min=52, max=10467, avg=6512.42, stdev=2435.78 00:21:06.070 lat (msec): min=2162, max=10482, avg=7009.07, stdev=2091.86 00:21:06.070 clat percentiles (msec): 00:21:06.070 | 1.00th=[ 53], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6409], 00:21:06.071 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 6477], 00:21:06.071 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[10402], 00:21:06.071 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.071 | 99.99th=[10402] 00:21:06.071 lat (msec) : 100=4.76%, >=2000=95.24% 00:21:06.071 cpu : usr=0.00%, sys=0.18%, ctx=70, majf=0, minf=5377 00:21:06.071 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:06.071 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701726: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=3, BW=3477KiB/s (3561kB/s)(36.0MiB/10602msec) 00:21:06.071 slat (usec): min=898, max=2104.1k, avg=293094.10, stdev=704771.35 00:21:06.071 clat (msec): min=50, max=10599, avg=7869.39, stdev=3335.60 00:21:06.071 lat (msec): min=2094, max=10601, avg=8162.49, stdev=3082.93 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 51], 5.00th=[ 2089], 10.00th=[ 2140], 20.00th=[ 4279], 00:21:06.071 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10402], 
60.00th=[10402], 00:21:06.071 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:06.071 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.071 | 99.99th=[10537] 00:21:06.071 lat (msec) : 100=2.78%, >=2000=97.22% 00:21:06.071 cpu : usr=0.00%, sys=0.26%, ctx=96, majf=0, minf=9217 00:21:06.071 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.071 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701727: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=1, BW=1269KiB/s (1299kB/s)(13.0MiB/10491msec) 00:21:06.071 slat (msec): min=18, max=2105, avg=803.10, stdev=1010.34 00:21:06.071 clat (msec): min=50, max=10427, avg=5889.15, stdev=3325.89 00:21:06.071 lat (msec): min=2129, max=10490, avg=6692.26, stdev=3047.40 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 51], 5.00th=[ 51], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:06.071 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 6409], 60.00th=[ 6477], 00:21:06.071 | 70.00th=[ 8557], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402], 00:21:06.071 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.071 | 99.99th=[10402] 00:21:06.071 lat (msec) : 100=7.69%, >=2000=92.31% 00:21:06.071 cpu : usr=0.00%, sys=0.10%, ctx=75, majf=0, minf=3329 00:21:06.071 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701728: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=9, BW=9.88MiB/s (10.4MB/s)(105MiB/10624msec) 00:21:06.071 slat (usec): min=393, max=2093.2k, avg=100690.89, stdev=409363.01 00:21:06.071 clat (msec): min=50, max=10619, avg=8491.43, stdev=2607.45 00:21:06.071 lat (msec): min=2087, max=10623, avg=8592.13, stdev=2479.34 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 2089], 5.00th=[ 4245], 10.00th=[ 6074], 20.00th=[ 6208], 00:21:06.071 | 30.00th=[ 6342], 40.00th=[ 8490], 50.00th=[10134], 60.00th=[10268], 00:21:06.071 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10671], 00:21:06.071 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:06.071 | 99.99th=[10671] 00:21:06.071 lat (msec) : 100=0.95%, >=2000=99.05% 00:21:06.071 cpu : usr=0.00%, sys=0.85%, ctx=181, majf=0, minf=26881 00:21:06.071 IO depths : 1=1.0%, 2=1.9%, 4=3.8%, 8=7.6%, 16=15.2%, 32=30.5%, >=64=40.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:06.071 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701729: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=58, BW=58.1MiB/s (61.0MB/s)(615MiB/10577msec) 00:21:06.071 slat (usec): min=56, max=2078.5k, avg=17100.71, 
stdev=140326.29 00:21:06.071 clat (msec): min=56, max=6392, avg=2084.65, stdev=1740.48 00:21:06.071 lat (msec): min=748, max=6423, avg=2101.75, stdev=1748.56 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 751], 5.00th=[ 751], 10.00th=[ 760], 20.00th=[ 785], 00:21:06.071 | 30.00th=[ 827], 40.00th=[ 877], 50.00th=[ 902], 60.00th=[ 2072], 00:21:06.071 | 70.00th=[ 2467], 80.00th=[ 4866], 90.00th=[ 5336], 95.00th=[ 5403], 00:21:06.071 | 99.00th=[ 5403], 99.50th=[ 5403], 99.90th=[ 6409], 99.95th=[ 6409], 00:21:06.071 | 99.99th=[ 6409] 00:21:06.071 bw ( KiB/s): min= 6144, max=176128, per=2.29%, avg=90638.55, stdev=71866.61, samples=11 00:21:06.071 iops : min= 6, max= 172, avg=88.45, stdev=70.10, samples=11 00:21:06.071 lat (msec) : 100=0.16%, 750=1.95%, 1000=53.01%, 2000=3.58%, >=2000=41.30% 00:21:06.071 cpu : usr=0.04%, sys=1.51%, ctx=601, majf=0, minf=32769 00:21:06.071 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.071 issued rwts: total=615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701730: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=2, BW=2151KiB/s (2203kB/s)(22.0MiB/10471msec) 00:21:06.071 slat (msec): min=5, max=4259, avg=473.32, stdev=1091.56 00:21:06.071 clat (msec): min=56, max=10456, avg=3770.67, stdev=2973.26 00:21:06.071 lat (msec): min=2087, max=10470, avg=4243.99, stdev=3175.85 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 57], 5.00th=[ 2089], 10.00th=[ 2089], 20.00th=[ 2106], 00:21:06.071 | 30.00th=[ 2123], 40.00th=[ 2140], 50.00th=[ 2140], 60.00th=[ 2165], 00:21:06.071 | 70.00th=[ 4279], 80.00th=[ 4329], 90.00th=[ 8658], 95.00th=[10402], 00:21:06.071 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.071 | 99.99th=[10402] 00:21:06.071 lat (msec) : 100=4.55%, >=2000=95.45% 00:21:06.071 cpu : usr=0.00%, sys=0.16%, ctx=77, majf=0, minf=5633 00:21:06.071 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:06.071 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701731: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=7, BW=8128KiB/s (8323kB/s)(84.0MiB/10583msec) 00:21:06.071 slat (usec): min=530, max=2124.9k, avg=125738.06, stdev=445334.86 00:21:06.071 clat (msec): min=20, max=10580, avg=4592.94, stdev=4050.40 00:21:06.071 lat (msec): min=1162, max=10582, avg=4718.68, stdev=4070.64 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 21], 5.00th=[ 1234], 10.00th=[ 1284], 20.00th=[ 1469], 00:21:06.071 | 30.00th=[ 1620], 40.00th=[ 1804], 50.00th=[ 1938], 60.00th=[ 2106], 00:21:06.071 | 70.00th=[ 8557], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:06.071 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.071 | 99.99th=[10537] 00:21:06.071 lat (msec) : 50=1.19%, 2000=48.81%, >=2000=50.00% 00:21:06.071 cpu : usr=0.01%, sys=0.58%, ctx=245, majf=0, minf=21505 00:21:06.071 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.5%, 16=19.0%, 
32=38.1%, >=64=25.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:06.071 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701732: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=5, BW=5876KiB/s (6017kB/s)(61.0MiB/10631msec) 00:21:06.071 slat (usec): min=951, max=2145.7k, avg=173333.89, stdev=566891.28 00:21:06.071 clat (msec): min=56, max=10626, avg=9715.24, stdev=2206.32 00:21:06.071 lat (msec): min=2175, max=10630, avg=9888.57, stdev=1815.61 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 57], 5.00th=[ 4279], 10.00th=[ 8557], 20.00th=[10402], 00:21:06.071 | 30.00th=[10537], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537], 00:21:06.071 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:21:06.071 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:21:06.071 | 99.99th=[10671] 00:21:06.071 lat (msec) : 100=1.64%, >=2000=98.36% 00:21:06.071 cpu : usr=0.00%, sys=0.64%, ctx=118, majf=0, minf=15617 00:21:06.071 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.071 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701733: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=70, BW=70.0MiB/s (73.4MB/s)(705MiB/10069msec) 00:21:06.071 slat (usec): min=45, max=2066.6k, avg=14241.36, stdev=98952.12 00:21:06.071 clat (msec): min=25, max=4864, avg=1311.50, stdev=989.22 00:21:06.071 lat (msec): min=84, max=4869, avg=1325.74, stdev=997.53 00:21:06.071 clat percentiles (msec): 00:21:06.071 | 1.00th=[ 94], 5.00th=[ 384], 10.00th=[ 785], 20.00th=[ 885], 00:21:06.071 | 30.00th=[ 919], 40.00th=[ 953], 50.00th=[ 986], 60.00th=[ 1003], 00:21:06.071 | 70.00th=[ 1062], 80.00th=[ 1653], 90.00th=[ 1888], 95.00th=[ 4732], 00:21:06.071 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:21:06.071 | 99.99th=[ 4866] 00:21:06.071 bw ( KiB/s): min=30720, max=147456, per=2.72%, avg=107426.91, stdev=42845.45, samples=11 00:21:06.071 iops : min= 30, max= 144, avg=104.91, stdev=41.84, samples=11 00:21:06.071 lat (msec) : 50=0.14%, 100=1.28%, 250=1.99%, 500=2.84%, 750=2.55% 00:21:06.071 lat (msec) : 1000=51.49%, 2000=31.49%, >=2000=8.23% 00:21:06.071 cpu : usr=0.00%, sys=1.24%, ctx=862, majf=0, minf=32769 00:21:06.071 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:21:06.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.071 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.071 issued rwts: total=705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.071 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.071 job1: (groupid=0, jobs=1): err= 0: pid=1701734: Mon Jul 15 18:13:04 2024 00:21:06.071 read: IOPS=32, BW=32.0MiB/s (33.6MB/s)(339MiB/10580msec) 00:21:06.072 slat (usec): min=61, max=2053.9k, avg=31067.60, stdev=184439.19 00:21:06.072 clat (msec): min=46, max=6482, avg=3360.58, stdev=1702.24 00:21:06.072 lat (msec): min=882, max=8455, avg=3391.65, 
stdev=1709.08 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 885], 5.00th=[ 894], 10.00th=[ 927], 20.00th=[ 995], 00:21:06.072 | 30.00th=[ 1871], 40.00th=[ 3540], 50.00th=[ 3876], 60.00th=[ 4111], 00:21:06.072 | 70.00th=[ 4245], 80.00th=[ 5201], 90.00th=[ 5403], 95.00th=[ 5537], 00:21:06.072 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 6477], 99.95th=[ 6477], 00:21:06.072 | 99.99th=[ 6477] 00:21:06.072 bw ( KiB/s): min= 6131, max=155648, per=1.56%, avg=61727.14, stdev=57500.99, samples=7 00:21:06.072 iops : min= 5, max= 152, avg=60.00, stdev=56.45, samples=7 00:21:06.072 lat (msec) : 50=0.29%, 1000=20.65%, 2000=11.80%, >=2000=67.26% 00:21:06.072 cpu : usr=0.00%, sys=1.11%, ctx=463, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.4% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:06.072 issued rwts: total=339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.072 job1: (groupid=0, jobs=1): err= 0: pid=1701735: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=6, BW=6585KiB/s (6743kB/s)(68.0MiB/10574msec) 00:21:06.072 slat (usec): min=1033, max=2173.5k, avg=155473.91, stdev=500566.92 00:21:06.072 clat (usec): min=1019, max=10571k, avg=6257544.84, stdev=4272829.15 00:21:06.072 lat (msec): min=1256, max=10573, avg=6413.02, stdev=4233.95 00:21:06.072 clat percentiles (usec): 00:21:06.072 | 1.00th=[ 1020], 5.00th=[ 1350566], 10.00th=[ 1451230], 00:21:06.072 | 20.00th=[ 1669333], 30.00th=[ 1904215], 40.00th=[ 2105541], 00:21:06.072 | 50.00th=[ 6408897], 60.00th=[10401874], 70.00th=[10536092], 00:21:06.072 | 80.00th=[10536092], 90.00th=[10536092], 95.00th=[10536092], 00:21:06.072 | 99.00th=[10536092], 99.50th=[10536092], 99.90th=[10536092], 00:21:06.072 | 99.95th=[10536092], 99.99th=[10536092] 00:21:06.072 lat (msec) : 2=1.47%, 2000=30.88%, >=2000=67.65% 00:21:06.072 cpu : usr=0.01%, sys=0.59%, ctx=233, majf=0, minf=17409 00:21:06.072 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:06.072 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.072 job1: (groupid=0, jobs=1): err= 0: pid=1701736: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=151, BW=152MiB/s (159MB/s)(1596MiB/10514msec) 00:21:06.072 slat (usec): min=45, max=2108.0k, avg=6580.25, stdev=54953.51 00:21:06.072 clat (usec): min=927, max=3111.8k, avg=784680.50, stdev=656101.06 00:21:06.072 lat (msec): min=375, max=3114, avg=791.26, stdev=658.70 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 376], 5.00th=[ 388], 10.00th=[ 405], 20.00th=[ 426], 00:21:06.072 | 30.00th=[ 451], 40.00th=[ 489], 50.00th=[ 498], 60.00th=[ 518], 00:21:06.072 | 70.00th=[ 751], 80.00th=[ 969], 90.00th=[ 1351], 95.00th=[ 2802], 00:21:06.072 | 99.00th=[ 3037], 99.50th=[ 3037], 99.90th=[ 3071], 99.95th=[ 3104], 00:21:06.072 | 99.99th=[ 3104] 00:21:06.072 bw ( KiB/s): min=30720, max=341333, per=5.06%, avg=200084.33, stdev=94453.95, samples=15 00:21:06.072 iops : min= 30, max= 333, avg=195.27, stdev=92.20, samples=15 00:21:06.072 lat (usec) : 1000=0.06% 00:21:06.072 lat (msec) : 500=51.25%, 750=18.48%, 
1000=13.35%, 2000=8.90%, >=2000=7.96% 00:21:06.072 cpu : usr=0.10%, sys=2.33%, ctx=1533, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.072 issued rwts: total=1596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.072 job1: (groupid=0, jobs=1): err= 0: pid=1701738: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=31, BW=31.6MiB/s (33.1MB/s)(320MiB/10142msec) 00:21:06.072 slat (usec): min=93, max=2145.3k, avg=31397.55, stdev=187798.21 00:21:06.072 clat (msec): min=93, max=8658, avg=3794.98, stdev=3178.11 00:21:06.072 lat (msec): min=154, max=8666, avg=3826.38, stdev=3183.84 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 161], 5.00th=[ 279], 10.00th=[ 542], 20.00th=[ 743], 00:21:06.072 | 30.00th=[ 877], 40.00th=[ 1183], 50.00th=[ 3339], 60.00th=[ 3775], 00:21:06.072 | 70.00th=[ 5940], 80.00th=[ 7953], 90.00th=[ 8356], 95.00th=[ 8490], 00:21:06.072 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:21:06.072 | 99.99th=[ 8658] 00:21:06.072 bw ( KiB/s): min= 4096, max=124928, per=0.99%, avg=39326.90, stdev=39306.90, samples=10 00:21:06.072 iops : min= 4, max= 122, avg=38.40, stdev=38.39, samples=10 00:21:06.072 lat (msec) : 100=0.31%, 250=3.12%, 500=6.25%, 750=12.50%, 1000=10.63% 00:21:06.072 lat (msec) : 2000=14.69%, >=2000=52.50% 00:21:06.072 cpu : usr=0.00%, sys=1.48%, ctx=555, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:06.072 issued rwts: total=320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.072 job2: (groupid=0, jobs=1): err= 0: pid=1701744: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=134, BW=134MiB/s (141MB/s)(1417MiB/10553msec) 00:21:06.072 slat (usec): min=42, max=2108.5k, avg=7400.29, stdev=74677.98 00:21:06.072 clat (msec): min=59, max=5153, avg=604.45, stdev=585.72 00:21:06.072 lat (msec): min=356, max=5167, avg=611.85, stdev=601.21 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 359], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 363], 00:21:06.072 | 30.00th=[ 368], 40.00th=[ 372], 50.00th=[ 397], 60.00th=[ 460], 00:21:06.072 | 70.00th=[ 558], 80.00th=[ 760], 90.00th=[ 978], 95.00th=[ 1217], 00:21:06.072 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 5134], 00:21:06.072 | 99.99th=[ 5134] 00:21:06.072 bw ( KiB/s): min=61440, max=360448, per=6.06%, avg=239919.45, stdev=110464.13, samples=11 00:21:06.072 iops : min= 60, max= 352, avg=234.27, stdev=107.90, samples=11 00:21:06.072 lat (msec) : 100=0.07%, 500=66.62%, 750=12.21%, 1000=13.34%, 2000=5.93% 00:21:06.072 lat (msec) : >=2000=1.83% 00:21:06.072 cpu : usr=0.04%, sys=2.05%, ctx=1511, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.6% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.072 issued rwts: total=1417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:21:06.072 job2: (groupid=0, jobs=1): err= 0: pid=1701745: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=21, BW=21.7MiB/s (22.7MB/s)(227MiB/10479msec) 00:21:06.072 slat (usec): min=46, max=2072.9k, avg=45938.51, stdev=267682.28 00:21:06.072 clat (msec): min=49, max=7976, avg=4618.62, stdev=3310.49 00:21:06.072 lat (msec): min=734, max=7977, avg=4664.56, stdev=3293.47 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 735], 5.00th=[ 735], 10.00th=[ 735], 20.00th=[ 743], 00:21:06.072 | 30.00th=[ 768], 40.00th=[ 1653], 50.00th=[ 7349], 60.00th=[ 7416], 00:21:06.072 | 70.00th=[ 7617], 80.00th=[ 7684], 90.00th=[ 7886], 95.00th=[ 7886], 00:21:06.072 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:21:06.072 | 99.99th=[ 7953] 00:21:06.072 bw ( KiB/s): min= 4096, max=118784, per=1.02%, avg=40550.40, stdev=50755.70, samples=5 00:21:06.072 iops : min= 4, max= 116, avg=39.60, stdev=49.57, samples=5 00:21:06.072 lat (msec) : 50=0.44%, 750=26.43%, 1000=11.45%, 2000=1.76%, >=2000=59.91% 00:21:06.072 cpu : usr=0.02%, sys=1.06%, ctx=215, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.1%, >=64=72.2% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:21:06.072 issued rwts: total=227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.072 job2: (groupid=0, jobs=1): err= 0: pid=1701746: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=40, BW=40.5MiB/s (42.4MB/s)(406MiB/10036msec) 00:21:06.072 slat (usec): min=35, max=2090.0k, avg=24624.60, stdev=173000.05 00:21:06.072 clat (msec): min=35, max=5760, avg=2362.25, stdev=2159.47 00:21:06.072 lat (msec): min=36, max=5771, avg=2386.87, stdev=2163.77 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 42], 5.00th=[ 134], 10.00th=[ 249], 20.00th=[ 514], 00:21:06.072 | 30.00th=[ 927], 40.00th=[ 1053], 50.00th=[ 1250], 60.00th=[ 1435], 00:21:06.072 | 70.00th=[ 3608], 80.00th=[ 5403], 90.00th=[ 5671], 95.00th=[ 5671], 00:21:06.072 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:21:06.072 | 99.99th=[ 5738] 00:21:06.072 bw ( KiB/s): min=14307, max=186368, per=1.41%, avg=55877.00, stdev=61519.61, samples=7 00:21:06.072 iops : min= 13, max= 182, avg=54.43, stdev=60.19, samples=7 00:21:06.072 lat (msec) : 50=2.22%, 250=7.88%, 500=7.64%, 750=7.39%, 1000=10.59% 00:21:06.072 lat (msec) : 2000=29.56%, >=2000=34.73% 00:21:06.072 cpu : usr=0.04%, sys=1.28%, ctx=581, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.9%, >=64=84.5% 00:21:06.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.072 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.072 issued rwts: total=406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.072 job2: (groupid=0, jobs=1): err= 0: pid=1701747: Mon Jul 15 18:13:04 2024 00:21:06.072 read: IOPS=82, BW=82.6MiB/s (86.7MB/s)(865MiB/10466msec) 00:21:06.072 slat (usec): min=43, max=1779.2k, avg=12028.89, stdev=84120.95 00:21:06.072 clat (msec): min=54, max=3422, avg=1181.48, stdev=564.17 00:21:06.072 lat (msec): min=755, max=3428, avg=1193.51, stdev=568.71 00:21:06.072 clat percentiles (msec): 00:21:06.072 | 1.00th=[ 760], 5.00th=[ 760], 10.00th=[ 
768], 20.00th=[ 785], 00:21:06.072 | 30.00th=[ 810], 40.00th=[ 835], 50.00th=[ 860], 60.00th=[ 894], 00:21:06.072 | 70.00th=[ 1502], 80.00th=[ 1770], 90.00th=[ 2039], 95.00th=[ 2232], 00:21:06.072 | 99.00th=[ 3306], 99.50th=[ 3373], 99.90th=[ 3406], 99.95th=[ 3406], 00:21:06.072 | 99.99th=[ 3406] 00:21:06.072 bw ( KiB/s): min=22528, max=163840, per=3.18%, avg=125779.67, stdev=52035.25, samples=12 00:21:06.072 iops : min= 22, max= 160, avg=122.75, stdev=50.76, samples=12 00:21:06.072 lat (msec) : 100=0.12%, 1000=67.75%, 2000=21.97%, >=2000=10.17% 00:21:06.072 cpu : usr=0.02%, sys=1.82%, ctx=902, majf=0, minf=32769 00:21:06.072 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.7% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.073 issued rwts: total=865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701748: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=6, BW=7012KiB/s (7180kB/s)(72.0MiB/10515msec) 00:21:06.073 slat (usec): min=426, max=2098.9k, avg=145778.16, stdev=491669.56 00:21:06.073 clat (msec): min=17, max=10512, avg=6959.89, stdev=3991.58 00:21:06.073 lat (msec): min=1573, max=10514, avg=7105.67, stdev=3925.60 00:21:06.073 clat percentiles (msec): 00:21:06.073 | 1.00th=[ 18], 5.00th=[ 1603], 10.00th=[ 1737], 20.00th=[ 1871], 00:21:06.073 | 30.00th=[ 2089], 40.00th=[ 6342], 50.00th=[10268], 60.00th=[10402], 00:21:06.073 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:06.073 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.073 | 99.99th=[10537] 00:21:06.073 lat (msec) : 20=1.39%, 2000=25.00%, >=2000=73.61% 00:21:06.073 cpu : usr=0.01%, sys=0.49%, ctx=146, majf=0, minf=18433 00:21:06.073 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:06.073 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701749: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=76, BW=76.3MiB/s (80.0MB/s)(810MiB/10621msec) 00:21:06.073 slat (usec): min=46, max=2097.7k, avg=13042.09, stdev=98266.53 00:21:06.073 clat (msec): min=52, max=6140, avg=1528.00, stdev=1823.17 00:21:06.073 lat (msec): min=480, max=6145, avg=1541.04, stdev=1829.24 00:21:06.073 clat percentiles (msec): 00:21:06.073 | 1.00th=[ 481], 5.00th=[ 481], 10.00th=[ 485], 20.00th=[ 485], 00:21:06.073 | 30.00th=[ 489], 40.00th=[ 493], 50.00th=[ 514], 60.00th=[ 659], 00:21:06.073 | 70.00th=[ 1070], 80.00th=[ 2198], 90.00th=[ 5537], 95.00th=[ 5873], 00:21:06.073 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:21:06.073 | 99.99th=[ 6141] 00:21:06.073 bw ( KiB/s): min= 2048, max=264192, per=3.21%, avg=126873.18, stdev=108973.64, samples=11 00:21:06.073 iops : min= 2, max= 258, avg=123.73, stdev=106.33, samples=11 00:21:06.073 lat (msec) : 100=0.12%, 500=45.06%, 750=18.89%, 1000=5.31%, 2000=8.02% 00:21:06.073 lat (msec) : >=2000=22.59% 00:21:06.073 cpu : usr=0.01%, sys=1.94%, ctx=1144, majf=0, minf=32769 00:21:06.073 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, 
>=64=92.2% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.073 issued rwts: total=810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701750: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=25, BW=25.8MiB/s (27.1MB/s)(273MiB/10570msec) 00:21:06.073 slat (usec): min=59, max=2112.3k, avg=38494.51, stdev=243045.73 00:21:06.073 clat (msec): min=59, max=9140, avg=4600.55, stdev=3543.11 00:21:06.073 lat (msec): min=795, max=9150, avg=4639.04, stdev=3538.40 00:21:06.073 clat percentiles (msec): 00:21:06.073 | 1.00th=[ 793], 5.00th=[ 844], 10.00th=[ 877], 20.00th=[ 1003], 00:21:06.073 | 30.00th=[ 1133], 40.00th=[ 1301], 50.00th=[ 3037], 60.00th=[ 6342], 00:21:06.073 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9060], 00:21:06.073 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:21:06.073 | 99.99th=[ 9194] 00:21:06.073 bw ( KiB/s): min= 8192, max=157696, per=1.25%, avg=49493.33, stdev=58321.87, samples=6 00:21:06.073 iops : min= 8, max= 154, avg=48.33, stdev=56.95, samples=6 00:21:06.073 lat (msec) : 100=0.37%, 1000=19.78%, 2000=21.98%, >=2000=57.88% 00:21:06.073 cpu : usr=0.01%, sys=0.87%, ctx=377, majf=0, minf=32769 00:21:06.073 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.7%, >=64=76.9% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:21:06.073 issued rwts: total=273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701751: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=24, BW=25.0MiB/s (26.2MB/s)(261MiB/10455msec) 00:21:06.073 slat (usec): min=44, max=2093.7k, avg=38406.74, stdev=246135.61 00:21:06.073 clat (msec): min=429, max=8987, avg=1566.54, stdev=1891.55 00:21:06.073 lat (msec): min=502, max=8994, avg=1604.95, stdev=1950.74 00:21:06.073 clat percentiles (msec): 00:21:06.073 | 1.00th=[ 506], 5.00th=[ 523], 10.00th=[ 609], 20.00th=[ 802], 00:21:06.073 | 30.00th=[ 911], 40.00th=[ 944], 50.00th=[ 978], 60.00th=[ 978], 00:21:06.073 | 70.00th=[ 1062], 80.00th=[ 1250], 90.00th=[ 2937], 95.00th=[ 7148], 00:21:06.073 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:21:06.073 | 99.99th=[ 8926] 00:21:06.073 bw ( KiB/s): min=18432, max=160507, per=2.28%, avg=90366.33, stdev=71054.48, samples=3 00:21:06.073 iops : min= 18, max= 156, avg=88.00, stdev=69.02, samples=3 00:21:06.073 lat (msec) : 500=0.38%, 750=17.62%, 1000=47.13%, 2000=22.61%, >=2000=12.26% 00:21:06.073 cpu : usr=0.01%, sys=0.92%, ctx=263, majf=0, minf=32769 00:21:06.073 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.3%, >=64=75.9% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:21:06.073 issued rwts: total=261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701752: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=1, BW=2046KiB/s (2095kB/s)(21.0MiB/10511msec) 00:21:06.073 slat (usec): min=790, max=2093.0k, avg=497958.55, stdev=869849.51 
00:21:06.073 clat (msec): min=52, max=10509, avg=5757.65, stdev=3448.71 00:21:06.073 lat (msec): min=2088, max=10510, avg=6255.61, stdev=3336.97 00:21:06.073 clat percentiles (msec): 00:21:06.073 | 1.00th=[ 53], 5.00th=[ 2089], 10.00th=[ 2106], 20.00th=[ 2140], 00:21:06.073 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6477], 00:21:06.073 | 70.00th=[ 8557], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537], 00:21:06.073 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.073 | 99.99th=[10537] 00:21:06.073 lat (msec) : 100=4.76%, >=2000=95.24% 00:21:06.073 cpu : usr=0.01%, sys=0.13%, ctx=79, majf=0, minf=5377 00:21:06.073 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:06.073 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701753: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=21, BW=21.1MiB/s (22.1MB/s)(221MiB/10488msec) 00:21:06.073 slat (usec): min=48, max=2116.6k, avg=47192.73, stdev=257501.03 00:21:06.073 clat (msec): min=56, max=6991, avg=2984.65, stdev=1632.23 00:21:06.073 lat (msec): min=969, max=7001, avg=3031.84, stdev=1655.38 00:21:06.073 clat percentiles (msec): 00:21:06.073 | 1.00th=[ 969], 5.00th=[ 978], 10.00th=[ 986], 20.00th=[ 1083], 00:21:06.073 | 30.00th=[ 1938], 40.00th=[ 3004], 50.00th=[ 3171], 60.00th=[ 3306], 00:21:06.073 | 70.00th=[ 3540], 80.00th=[ 3675], 90.00th=[ 5134], 95.00th=[ 6879], 00:21:06.073 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:21:06.073 | 99.99th=[ 7013] 00:21:06.073 bw ( KiB/s): min= 2048, max=124928, per=0.96%, avg=38083.80, stdev=49444.83, samples=5 00:21:06.073 iops : min= 2, max= 122, avg=37.00, stdev=48.36, samples=5 00:21:06.073 lat (msec) : 100=0.45%, 1000=19.46%, 2000=10.86%, >=2000=69.23% 00:21:06.073 cpu : usr=0.02%, sys=0.92%, ctx=247, majf=0, minf=32769 00:21:06.073 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, >=64=71.5% 00:21:06.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.073 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:21:06.073 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.073 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.073 job2: (groupid=0, jobs=1): err= 0: pid=1701754: Mon Jul 15 18:13:04 2024 00:21:06.073 read: IOPS=3, BW=3422KiB/s (3504kB/s)(35.0MiB/10473msec) 00:21:06.073 slat (msec): min=4, max=2070, avg=297.38, stdev=704.78 00:21:06.073 clat (msec): min=63, max=10418, avg=5414.18, stdev=2894.36 00:21:06.073 lat (msec): min=2089, max=10472, avg=5711.56, stdev=2862.99 00:21:06.073 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 64], 5.00th=[ 2089], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:06.074 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6409], 00:21:06.074 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10402], 95.00th=[10402], 00:21:06.074 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.074 | 99.99th=[10402] 00:21:06.074 lat (msec) : 100=2.86%, >=2000=97.14% 00:21:06.074 cpu : usr=0.01%, sys=0.30%, ctx=72, majf=0, minf=8961 00:21:06.074 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:21:06.074 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.074 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job2: (groupid=0, jobs=1): err= 0: pid=1701755: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=108, BW=109MiB/s (114MB/s)(1150MiB/10572msec) 00:21:06.074 slat (usec): min=56, max=2065.9k, avg=9183.03, stdev=65833.25 00:21:06.074 clat (usec): min=598, max=3246.8k, avg=1132975.42, stdev=697810.19 00:21:06.074 lat (msec): min=622, max=3258, avg=1142.16, stdev=699.44 00:21:06.074 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 634], 5.00th=[ 667], 10.00th=[ 718], 20.00th=[ 768], 00:21:06.074 | 30.00th=[ 818], 40.00th=[ 860], 50.00th=[ 902], 60.00th=[ 936], 00:21:06.074 | 70.00th=[ 978], 80.00th=[ 1062], 90.00th=[ 2836], 95.00th=[ 3071], 00:21:06.074 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3239], 99.95th=[ 3239], 00:21:06.074 | 99.99th=[ 3239] 00:21:06.074 bw ( KiB/s): min=34816, max=202752, per=3.31%, avg=130799.38, stdev=44824.15, samples=16 00:21:06.074 iops : min= 34, max= 198, avg=127.69, stdev=43.77, samples=16 00:21:06.074 lat (usec) : 750=0.09% 00:21:06.074 lat (msec) : 750=16.00%, 1000=60.17%, 2000=12.70%, >=2000=11.04% 00:21:06.074 cpu : usr=0.07%, sys=2.86%, ctx=1022, majf=0, minf=32769 00:21:06.074 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.074 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job2: (groupid=0, jobs=1): err= 0: pid=1701756: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=4, BW=4478KiB/s (4585kB/s)(46.0MiB/10519msec) 00:21:06.074 slat (usec): min=782, max=2104.3k, avg=227274.54, stdev=628143.87 00:21:06.074 clat (msec): min=63, max=10516, avg=7547.97, stdev=3250.04 00:21:06.074 lat (msec): min=2092, max=10518, avg=7775.24, stdev=3075.91 00:21:06.074 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 64], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 4279], 00:21:06.074 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10402], 00:21:06.074 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:06.074 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.074 | 99.99th=[10537] 00:21:06.074 lat (msec) : 100=2.17%, >=2000=97.83% 00:21:06.074 cpu : usr=0.02%, sys=0.42%, ctx=79, majf=0, minf=11777 00:21:06.074 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.074 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job3: (groupid=0, jobs=1): err= 0: pid=1701759: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=3, BW=3709KiB/s (3798kB/s)(38.0MiB/10492msec) 00:21:06.074 slat (usec): min=1057, max=2077.1k, avg=274410.69, stdev=668030.34 00:21:06.074 clat (msec): min=63, max=10407, avg=5761.12, stdev=2839.80 00:21:06.074 lat (msec): min=2086, max=10491, avg=6035.53, stdev=2777.51 00:21:06.074 clat percentiles (msec): 
00:21:06.074 | 1.00th=[ 64], 5.00th=[ 2089], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:06.074 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 6342], 60.00th=[ 6409], 00:21:06.074 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10402], 95.00th=[10402], 00:21:06.074 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.074 | 99.99th=[10402] 00:21:06.074 lat (msec) : 100=2.63%, >=2000=97.37% 00:21:06.074 cpu : usr=0.01%, sys=0.29%, ctx=104, majf=0, minf=9729 00:21:06.074 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.074 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job3: (groupid=0, jobs=1): err= 0: pid=1701760: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=1, BW=1563KiB/s (1601kB/s)(16.0MiB/10480msec) 00:21:06.074 slat (msec): min=12, max=2141, avg=650.87, stdev=962.80 00:21:06.074 clat (msec): min=65, max=10393, avg=6672.77, stdev=2739.06 00:21:06.074 lat (msec): min=2162, max=10479, avg=7323.64, stdev=2259.61 00:21:06.074 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 66], 5.00th=[ 66], 10.00th=[ 2165], 20.00th=[ 6409], 00:21:06.074 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 6477], 00:21:06.074 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10402], 95.00th=[10402], 00:21:06.074 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:06.074 | 99.99th=[10402] 00:21:06.074 lat (msec) : 100=6.25%, >=2000=93.75% 00:21:06.074 cpu : usr=0.00%, sys=0.15%, ctx=71, majf=0, minf=4097 00:21:06.074 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job3: (groupid=0, jobs=1): err= 0: pid=1701761: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=46, BW=46.1MiB/s (48.4MB/s)(467MiB/10120msec) 00:21:06.074 slat (usec): min=512, max=111024, avg=21474.72, stdev=22258.46 00:21:06.074 clat (msec): min=88, max=4111, avg=2499.22, stdev=937.59 00:21:06.074 lat (msec): min=189, max=4150, avg=2520.70, stdev=937.76 00:21:06.074 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 401], 5.00th=[ 1183], 10.00th=[ 1485], 20.00th=[ 1720], 00:21:06.074 | 30.00th=[ 1888], 40.00th=[ 2072], 50.00th=[ 2198], 60.00th=[ 2534], 00:21:06.074 | 70.00th=[ 3138], 80.00th=[ 3608], 90.00th=[ 3876], 95.00th=[ 4010], 00:21:06.074 | 99.00th=[ 4077], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4111], 00:21:06.074 | 99.99th=[ 4111] 00:21:06.074 bw ( KiB/s): min=14336, max=133120, per=1.03%, avg=40839.53, stdev=28794.90, samples=17 00:21:06.074 iops : min= 14, max= 130, avg=39.88, stdev=28.12, samples=17 00:21:06.074 lat (msec) : 100=0.21%, 250=0.43%, 500=0.64%, 750=0.86%, 1000=0.86% 00:21:06.074 lat (msec) : 2000=32.76%, >=2000=64.24% 00:21:06.074 cpu : usr=0.05%, sys=1.25%, ctx=1631, majf=0, minf=32769 00:21:06.074 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.074 issued rwts: total=467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job3: (groupid=0, jobs=1): err= 0: pid=1701762: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=39, BW=40.0MiB/s (41.9MB/s)(416MiB/10409msec) 00:21:06.074 slat (usec): min=533, max=1860.4k, avg=25011.61, stdev=96770.36 00:21:06.074 clat (usec): min=1105, max=3518.8k, avg=2559142.78, stdev=481417.35 00:21:06.074 lat (msec): min=1494, max=3530, avg=2584.15, stdev=465.69 00:21:06.074 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 1502], 5.00th=[ 1787], 10.00th=[ 2022], 20.00th=[ 2198], 00:21:06.074 | 30.00th=[ 2299], 40.00th=[ 2400], 50.00th=[ 2467], 60.00th=[ 2601], 00:21:06.074 | 70.00th=[ 2903], 80.00th=[ 3037], 90.00th=[ 3205], 95.00th=[ 3306], 00:21:06.074 | 99.00th=[ 3473], 99.50th=[ 3507], 99.90th=[ 3507], 99.95th=[ 3507], 00:21:06.074 | 99.99th=[ 3507] 00:21:06.074 bw ( KiB/s): min=10240, max=104448, per=1.35%, avg=53609.09, stdev=28868.69, samples=11 00:21:06.074 iops : min= 10, max= 102, avg=52.18, stdev=28.34, samples=11 00:21:06.074 lat (msec) : 2=0.24%, 2000=8.17%, >=2000=91.59% 00:21:06.074 cpu : usr=0.06%, sys=1.04%, ctx=1418, majf=0, minf=32769 00:21:06.074 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.074 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job3: (groupid=0, jobs=1): err= 0: pid=1701763: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=37, BW=37.9MiB/s (39.8MB/s)(396MiB/10438msec) 00:21:06.074 slat (usec): min=796, max=2072.7k, avg=25250.29, stdev=110152.57 00:21:06.074 clat (msec): min=436, max=5677, avg=2895.78, stdev=1512.06 00:21:06.074 lat (msec): min=449, max=5688, avg=2921.03, stdev=1512.04 00:21:06.074 clat percentiles (msec): 00:21:06.074 | 1.00th=[ 451], 5.00th=[ 768], 10.00th=[ 1267], 20.00th=[ 1737], 00:21:06.074 | 30.00th=[ 1921], 40.00th=[ 2140], 50.00th=[ 2433], 60.00th=[ 2534], 00:21:06.074 | 70.00th=[ 4396], 80.00th=[ 4866], 90.00th=[ 5201], 95.00th=[ 5269], 00:21:06.074 | 99.00th=[ 5604], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:21:06.074 | 99.99th=[ 5671] 00:21:06.074 bw ( KiB/s): min=12288, max=98304, per=1.26%, avg=49965.09, stdev=24437.97, samples=11 00:21:06.074 iops : min= 12, max= 96, avg=48.73, stdev=23.85, samples=11 00:21:06.074 lat (msec) : 500=2.27%, 750=2.53%, 1000=2.53%, 2000=27.78%, >=2000=64.90% 00:21:06.074 cpu : usr=0.00%, sys=1.04%, ctx=1249, majf=0, minf=32769 00:21:06.074 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:21:06.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.074 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.074 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.074 job3: (groupid=0, jobs=1): err= 0: pid=1701764: Mon Jul 15 18:13:04 2024 00:21:06.074 read: IOPS=111, BW=112MiB/s (117MB/s)(1132MiB/10116msec) 00:21:06.074 slat (usec): min=43, max=96809, avg=8854.18, stdev=11197.85 00:21:06.074 clat (msec): min=87, max=2373, avg=1058.56, stdev=525.04 00:21:06.074 lat (msec): min=157, max=2375, avg=1067.42, stdev=526.73 00:21:06.074 
clat percentiles (msec): 00:21:06.074 | 1.00th=[ 368], 5.00th=[ 372], 10.00th=[ 388], 20.00th=[ 456], 00:21:06.074 | 30.00th=[ 701], 40.00th=[ 852], 50.00th=[ 1083], 60.00th=[ 1217], 00:21:06.074 | 70.00th=[ 1401], 80.00th=[ 1485], 90.00th=[ 1687], 95.00th=[ 2022], 00:21:06.074 | 99.00th=[ 2333], 99.50th=[ 2333], 99.90th=[ 2366], 99.95th=[ 2366], 00:21:06.074 | 99.99th=[ 2366] 00:21:06.074 bw ( KiB/s): min=14336, max=346112, per=2.89%, avg=114240.61, stdev=87472.49, samples=18 00:21:06.074 iops : min= 14, max= 338, avg=111.56, stdev=85.43, samples=18 00:21:06.075 lat (msec) : 100=0.09%, 250=0.27%, 500=25.44%, 750=9.72%, 1000=6.45% 00:21:06.075 lat (msec) : 2000=53.00%, >=2000=5.04% 00:21:06.075 cpu : usr=0.01%, sys=2.11%, ctx=2293, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.075 issued rwts: total=1132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): err= 0: pid=1701765: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=26, BW=26.8MiB/s (28.1MB/s)(283MiB/10568msec) 00:21:06.075 slat (usec): min=119, max=2091.2k, avg=37150.39, stdev=204448.67 00:21:06.075 clat (msec): min=52, max=8737, avg=4449.65, stdev=1851.83 00:21:06.075 lat (msec): min=1465, max=8749, avg=4486.80, stdev=1847.83 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 1469], 5.00th=[ 1620], 10.00th=[ 1636], 20.00th=[ 3675], 00:21:06.075 | 30.00th=[ 3775], 40.00th=[ 4077], 50.00th=[ 4111], 60.00th=[ 4866], 00:21:06.075 | 70.00th=[ 5269], 80.00th=[ 5940], 90.00th=[ 6409], 95.00th=[ 8658], 00:21:06.075 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:21:06.075 | 99.99th=[ 8792] 00:21:06.075 bw ( KiB/s): min= 4096, max=77824, per=0.80%, avg=31742.70, stdev=25911.30, samples=10 00:21:06.075 iops : min= 4, max= 76, avg=30.90, stdev=25.41, samples=10 00:21:06.075 lat (msec) : 100=0.35%, 2000=16.96%, >=2000=82.69% 00:21:06.075 cpu : usr=0.02%, sys=1.43%, ctx=680, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:21:06.075 issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): err= 0: pid=1701766: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=58, BW=58.7MiB/s (61.6MB/s)(591MiB/10060msec) 00:21:06.075 slat (usec): min=109, max=119869, avg=16935.59, stdev=15414.28 00:21:06.075 clat (msec): min=46, max=4924, avg=1974.44, stdev=759.57 00:21:06.075 lat (msec): min=79, max=4951, avg=1991.38, stdev=760.67 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 186], 5.00th=[ 709], 10.00th=[ 1200], 20.00th=[ 1267], 00:21:06.075 | 30.00th=[ 1368], 40.00th=[ 1670], 50.00th=[ 2089], 60.00th=[ 2467], 00:21:06.075 | 70.00th=[ 2567], 80.00th=[ 2668], 90.00th=[ 2735], 95.00th=[ 2869], 00:21:06.075 | 99.00th=[ 2937], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:21:06.075 | 99.99th=[ 4933] 00:21:06.075 bw ( KiB/s): min=28672, max=112640, per=1.41%, avg=55878.24, stdev=22350.77, samples=17 00:21:06.075 iops : min= 28, max= 
110, avg=54.53, stdev=21.85, samples=17 00:21:06.075 lat (msec) : 50=0.17%, 100=0.17%, 250=1.35%, 500=1.52%, 750=1.86% 00:21:06.075 lat (msec) : 1000=1.69%, 2000=41.12%, >=2000=52.12% 00:21:06.075 cpu : usr=0.05%, sys=1.67%, ctx=1772, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.075 issued rwts: total=591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): err= 0: pid=1701767: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=113, BW=114MiB/s (119MB/s)(1144MiB/10048msec) 00:21:06.075 slat (usec): min=48, max=2078.6k, avg=8738.38, stdev=65882.82 00:21:06.075 clat (msec): min=44, max=6813, avg=1088.47, stdev=1341.69 00:21:06.075 lat (msec): min=68, max=6829, avg=1097.21, stdev=1352.29 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 93], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 257], 00:21:06.075 | 30.00th=[ 259], 40.00th=[ 264], 50.00th=[ 321], 60.00th=[ 451], 00:21:06.075 | 70.00th=[ 827], 80.00th=[ 2534], 90.00th=[ 3306], 95.00th=[ 3641], 00:21:06.075 | 99.00th=[ 5000], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 6812], 00:21:06.075 | 99.99th=[ 6812] 00:21:06.075 bw ( KiB/s): min= 2048, max=505856, per=3.29%, avg=130102.12, stdev=146557.27, samples=16 00:21:06.075 iops : min= 2, max= 494, avg=126.94, stdev=143.14, samples=16 00:21:06.075 lat (msec) : 50=0.09%, 100=1.14%, 250=1.40%, 500=58.57%, 750=3.93% 00:21:06.075 lat (msec) : 1000=7.26%, 2000=5.77%, >=2000=21.85% 00:21:06.075 cpu : usr=0.06%, sys=2.14%, ctx=1726, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.075 issued rwts: total=1144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): err= 0: pid=1701768: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=4, BW=5026KiB/s (5147kB/s)(52.0MiB/10594msec) 00:21:06.075 slat (usec): min=654, max=2156.8k, avg=202494.15, stdev=599242.39 00:21:06.075 clat (msec): min=63, max=10590, avg=9366.95, stdev=2337.89 00:21:06.075 lat (msec): min=2103, max=10593, avg=9569.44, stdev=1938.08 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 64], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[ 8557], 00:21:06.075 | 30.00th=[10402], 40.00th=[10537], 50.00th=[10537], 60.00th=[10537], 00:21:06.075 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:21:06.075 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:06.075 | 99.99th=[10537] 00:21:06.075 lat (msec) : 100=1.92%, >=2000=98.08% 00:21:06.075 cpu : usr=0.00%, sys=0.55%, ctx=121, majf=0, minf=13313 00:21:06.075 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:06.075 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): 
err= 0: pid=1701769: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=110, BW=110MiB/s (116MB/s)(1113MiB/10102msec) 00:21:06.075 slat (usec): min=48, max=2094.7k, avg=8982.75, stdev=64740.52 00:21:06.075 clat (msec): min=97, max=3667, avg=1107.51, stdev=879.92 00:21:06.075 lat (msec): min=101, max=3727, avg=1116.49, stdev=883.65 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 201], 5.00th=[ 472], 10.00th=[ 651], 20.00th=[ 751], 00:21:06.075 | 30.00th=[ 760], 40.00th=[ 785], 50.00th=[ 818], 60.00th=[ 835], 00:21:06.075 | 70.00th=[ 885], 80.00th=[ 986], 90.00th=[ 3373], 95.00th=[ 3540], 00:21:06.075 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675], 00:21:06.075 | 99.99th=[ 3675] 00:21:06.075 bw ( KiB/s): min= 8192, max=200704, per=3.19%, avg=126186.50, stdev=54345.39, samples=16 00:21:06.075 iops : min= 8, max= 196, avg=123.19, stdev=53.03, samples=16 00:21:06.075 lat (msec) : 100=0.09%, 250=1.08%, 500=4.04%, 750=15.99%, 1000=59.03% 00:21:06.075 lat (msec) : 2000=8.36%, >=2000=11.41% 00:21:06.075 cpu : usr=0.06%, sys=2.00%, ctx=1129, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.075 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): err= 0: pid=1701770: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(364MiB/10518msec) 00:21:06.075 slat (usec): min=1634, max=2022.8k, avg=28710.24, stdev=144893.05 00:21:06.075 clat (msec): min=65, max=6077, avg=3200.69, stdev=1358.51 00:21:06.075 lat (msec): min=2058, max=6088, avg=3229.40, stdev=1348.83 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 2072], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 2165], 00:21:06.075 | 30.00th=[ 2232], 40.00th=[ 2265], 50.00th=[ 2366], 60.00th=[ 2467], 00:21:06.075 | 70.00th=[ 4178], 80.00th=[ 4799], 90.00th=[ 5470], 95.00th=[ 5671], 00:21:06.075 | 99.00th=[ 6007], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:21:06.075 | 99.99th=[ 6074] 00:21:06.075 bw ( KiB/s): min=12288, max=77824, per=1.22%, avg=48310.60, stdev=20015.48, samples=10 00:21:06.075 iops : min= 12, max= 76, avg=47.00, stdev=19.48, samples=10 00:21:06.075 lat (msec) : 100=0.27%, >=2000=99.73% 00:21:06.075 cpu : usr=0.00%, sys=1.01%, ctx=1311, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.075 issued rwts: total=364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job3: (groupid=0, jobs=1): err= 0: pid=1701771: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=39, BW=39.8MiB/s (41.7MB/s)(421MiB/10585msec) 00:21:06.075 slat (usec): min=622, max=1376.3k, avg=23777.36, stdev=69367.14 00:21:06.075 clat (msec): min=571, max=4635, avg=2900.37, stdev=813.76 00:21:06.075 lat (msec): min=620, max=4667, avg=2924.15, stdev=805.92 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 835], 5.00th=[ 1519], 10.00th=[ 1770], 20.00th=[ 2123], 00:21:06.075 | 30.00th=[ 2601], 40.00th=[ 2836], 50.00th=[ 2970], 60.00th=[ 3037], 
00:21:06.075 | 70.00th=[ 3138], 80.00th=[ 3641], 90.00th=[ 4044], 95.00th=[ 4279], 00:21:06.075 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:21:06.075 | 99.99th=[ 4665] 00:21:06.075 bw ( KiB/s): min= 1996, max=94019, per=0.95%, avg=37614.38, stdev=29250.70, samples=16 00:21:06.075 iops : min= 1, max= 91, avg=36.56, stdev=28.58, samples=16 00:21:06.075 lat (msec) : 750=0.71%, 1000=0.71%, 2000=14.49%, >=2000=84.09% 00:21:06.075 cpu : usr=0.07%, sys=1.37%, ctx=1546, majf=0, minf=32769 00:21:06.075 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:21:06.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.075 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.075 issued rwts: total=421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.075 job4: (groupid=0, jobs=1): err= 0: pid=1701773: Mon Jul 15 18:13:04 2024 00:21:06.075 read: IOPS=39, BW=39.3MiB/s (41.2MB/s)(395MiB/10058msec) 00:21:06.075 slat (usec): min=127, max=2080.5k, avg=25335.60, stdev=110997.61 00:21:06.075 clat (msec): min=47, max=6189, avg=3101.03, stdev=1790.75 00:21:06.075 lat (msec): min=62, max=6258, avg=3126.37, stdev=1795.11 00:21:06.075 clat percentiles (msec): 00:21:06.075 | 1.00th=[ 101], 5.00th=[ 372], 10.00th=[ 776], 20.00th=[ 877], 00:21:06.075 | 30.00th=[ 1636], 40.00th=[ 2970], 50.00th=[ 3608], 60.00th=[ 3809], 00:21:06.075 | 70.00th=[ 3977], 80.00th=[ 4732], 90.00th=[ 5604], 95.00th=[ 5940], 00:21:06.075 | 99.00th=[ 6141], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:21:06.075 | 99.99th=[ 6208] 00:21:06.075 bw ( KiB/s): min= 2048, max=63488, per=0.92%, avg=36590.93, stdev=14622.91, samples=15 00:21:06.075 iops : min= 2, max= 62, avg=35.73, stdev=14.28, samples=15 00:21:06.076 lat (msec) : 50=0.25%, 100=0.51%, 250=2.78%, 500=2.78%, 750=2.78% 00:21:06.076 lat (msec) : 1000=16.20%, 2000=7.09%, >=2000=67.59% 00:21:06.076 cpu : usr=0.00%, sys=1.53%, ctx=1062, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.1% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.076 issued rwts: total=395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701774: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=177, BW=177MiB/s (186MB/s)(1776MiB/10026msec) 00:21:06.076 slat (usec): min=36, max=1980.8k, avg=5630.08, stdev=63105.03 00:21:06.076 clat (msec): min=21, max=4639, avg=428.66, stdev=380.07 00:21:06.076 lat (msec): min=26, max=4656, avg=434.29, stdev=393.84 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 74], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 253], 00:21:06.076 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 262], 60.00th=[ 359], 00:21:06.076 | 70.00th=[ 401], 80.00th=[ 567], 90.00th=[ 735], 95.00th=[ 1217], 00:21:06.076 | 99.00th=[ 1318], 99.50th=[ 2836], 99.90th=[ 4665], 99.95th=[ 4665], 00:21:06.076 | 99.99th=[ 4665] 00:21:06.076 bw ( KiB/s): min=34816, max=514048, per=8.29%, avg=327885.90, stdev=168942.16, samples=10 00:21:06.076 iops : min= 34, max= 502, avg=320.10, stdev=164.86, samples=10 00:21:06.076 lat (msec) : 50=0.79%, 100=0.45%, 250=1.18%, 500=76.18%, 750=13.46% 00:21:06.076 lat (msec) : 1000=1.24%, 2000=5.91%, >=2000=0.79% 
00:21:06.076 cpu : usr=0.03%, sys=2.00%, ctx=2111, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.5% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.076 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701775: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=165, BW=165MiB/s (173MB/s)(1667MiB/10084msec) 00:21:06.076 slat (usec): min=42, max=2078.6k, avg=5992.63, stdev=51799.10 00:21:06.076 clat (msec): min=82, max=2675, avg=712.77, stdev=594.44 00:21:06.076 lat (msec): min=83, max=2680, avg=718.76, stdev=596.65 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 205], 5.00th=[ 363], 10.00th=[ 363], 20.00th=[ 368], 00:21:06.076 | 30.00th=[ 388], 40.00th=[ 481], 50.00th=[ 489], 60.00th=[ 518], 00:21:06.076 | 70.00th=[ 676], 80.00th=[ 961], 90.00th=[ 1062], 95.00th=[ 2601], 00:21:06.076 | 99.00th=[ 2668], 99.50th=[ 2668], 99.90th=[ 2668], 99.95th=[ 2668], 00:21:06.076 | 99.99th=[ 2668] 00:21:06.076 bw ( KiB/s): min=34816, max=349508, per=5.31%, avg=209935.20, stdev=104986.79, samples=15 00:21:06.076 iops : min= 34, max= 341, avg=204.93, stdev=102.55, samples=15 00:21:06.076 lat (msec) : 100=0.24%, 250=0.96%, 500=57.23%, 750=12.42%, 1000=14.16% 00:21:06.076 lat (msec) : 2000=7.38%, >=2000=7.62% 00:21:06.076 cpu : usr=0.15%, sys=2.49%, ctx=1564, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.076 issued rwts: total=1667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701776: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=143, BW=143MiB/s (150MB/s)(1441MiB/10055msec) 00:21:06.076 slat (usec): min=51, max=86557, avg=6953.79, stdev=12967.57 00:21:06.076 clat (msec): min=27, max=1467, avg=800.91, stdev=190.45 00:21:06.076 lat (msec): min=62, max=1505, avg=807.87, stdev=191.88 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 120], 5.00th=[ 468], 10.00th=[ 584], 20.00th=[ 701], 00:21:06.076 | 30.00th=[ 743], 40.00th=[ 768], 50.00th=[ 810], 60.00th=[ 860], 00:21:06.076 | 70.00th=[ 911], 80.00th=[ 953], 90.00th=[ 995], 95.00th=[ 1020], 00:21:06.076 | 99.00th=[ 1234], 99.50th=[ 1351], 99.90th=[ 1469], 99.95th=[ 1469], 00:21:06.076 | 99.99th=[ 1469] 00:21:06.076 bw ( KiB/s): min= 4096, max=221184, per=3.75%, avg=148539.53, stdev=46593.72, samples=17 00:21:06.076 iops : min= 4, max= 216, avg=145.00, stdev=45.52, samples=17 00:21:06.076 lat (msec) : 50=0.07%, 100=0.90%, 250=1.39%, 500=2.98%, 750=26.93% 00:21:06.076 lat (msec) : 1000=61.35%, 2000=6.38% 00:21:06.076 cpu : usr=0.05%, sys=2.54%, ctx=1365, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.076 issued rwts: total=1441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 
00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701777: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=33, BW=33.7MiB/s (35.3MB/s)(340MiB/10099msec) 00:21:06.076 slat (usec): min=35, max=2086.0k, avg=29422.55, stdev=147584.06 00:21:06.076 clat (msec): min=93, max=5746, avg=2104.47, stdev=904.32 00:21:06.076 lat (msec): min=101, max=5787, avg=2133.90, stdev=924.74 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 127], 5.00th=[ 321], 10.00th=[ 810], 20.00th=[ 1687], 00:21:06.076 | 30.00th=[ 2089], 40.00th=[ 2198], 50.00th=[ 2265], 60.00th=[ 2333], 00:21:06.076 | 70.00th=[ 2366], 80.00th=[ 2433], 90.00th=[ 2601], 95.00th=[ 2668], 00:21:06.076 | 99.00th=[ 5671], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:21:06.076 | 99.99th=[ 5738] 00:21:06.076 bw ( KiB/s): min=20480, max=110592, per=1.22%, avg=48469.33, stdev=25332.36, samples=9 00:21:06.076 iops : min= 20, max= 108, avg=47.33, stdev=24.74, samples=9 00:21:06.076 lat (msec) : 100=0.29%, 250=3.82%, 500=2.65%, 750=2.65%, 1000=2.06% 00:21:06.076 lat (msec) : 2000=12.94%, >=2000=75.59% 00:21:06.076 cpu : usr=0.02%, sys=1.10%, ctx=1220, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.5% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:06.076 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701778: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(410MiB/10095msec) 00:21:06.076 slat (usec): min=487, max=102783, avg=24393.36, stdev=25981.50 00:21:06.076 clat (msec): min=91, max=4307, avg=2851.43, stdev=1016.12 00:21:06.076 lat (msec): min=114, max=4311, avg=2875.82, stdev=1017.05 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 249], 5.00th=[ 751], 10.00th=[ 1519], 20.00th=[ 2106], 00:21:06.076 | 30.00th=[ 2433], 40.00th=[ 2601], 50.00th=[ 2802], 60.00th=[ 3138], 00:21:06.076 | 70.00th=[ 3540], 80.00th=[ 3910], 90.00th=[ 4077], 95.00th=[ 4212], 00:21:06.076 | 99.00th=[ 4279], 99.50th=[ 4279], 99.90th=[ 4329], 99.95th=[ 4329], 00:21:06.076 | 99.99th=[ 4329] 00:21:06.076 bw ( KiB/s): min=10240, max=67584, per=0.86%, avg=34078.65, stdev=16162.50, samples=17 00:21:06.076 iops : min= 10, max= 66, avg=33.12, stdev=15.72, samples=17 00:21:06.076 lat (msec) : 100=0.24%, 250=0.98%, 500=1.71%, 750=1.95%, 1000=1.71% 00:21:06.076 lat (msec) : 2000=10.98%, >=2000=82.44% 00:21:06.076 cpu : usr=0.03%, sys=1.42%, ctx=1602, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.076 issued rwts: total=410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701779: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=58, BW=58.8MiB/s (61.6MB/s)(590MiB/10038msec) 00:21:06.076 slat (usec): min=38, max=131753, avg=16959.46, stdev=21211.52 00:21:06.076 clat (msec): min=28, max=3148, avg=1902.56, stdev=758.84 00:21:06.076 lat (msec): min=43, max=3176, avg=1919.52, stdev=759.91 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 68], 5.00th=[ 550], 
10.00th=[ 953], 20.00th=[ 1070], 00:21:06.076 | 30.00th=[ 1502], 40.00th=[ 1854], 50.00th=[ 2005], 60.00th=[ 2089], 00:21:06.076 | 70.00th=[ 2232], 80.00th=[ 2702], 90.00th=[ 2970], 95.00th=[ 3071], 00:21:06.076 | 99.00th=[ 3104], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:21:06.076 | 99.99th=[ 3138] 00:21:06.076 bw ( KiB/s): min=28672, max=163840, per=1.50%, avg=59177.00, stdev=36079.25, samples=16 00:21:06.076 iops : min= 28, max= 160, avg=57.75, stdev=35.24, samples=16 00:21:06.076 lat (msec) : 50=0.34%, 100=0.85%, 250=0.85%, 500=2.54%, 750=1.36% 00:21:06.076 lat (msec) : 1000=8.64%, 2000=34.41%, >=2000=51.02% 00:21:06.076 cpu : usr=0.03%, sys=1.22%, ctx=1534, majf=0, minf=32769 00:21:06.076 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:21:06.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.076 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.076 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.076 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.076 job4: (groupid=0, jobs=1): err= 0: pid=1701780: Mon Jul 15 18:13:04 2024 00:21:06.076 read: IOPS=61, BW=61.1MiB/s (64.0MB/s)(616MiB/10085msec) 00:21:06.076 slat (usec): min=54, max=1775.9k, avg=16236.92, stdev=99764.21 00:21:06.076 clat (msec): min=78, max=5438, avg=1739.97, stdev=1531.57 00:21:06.076 lat (msec): min=127, max=5438, avg=1756.21, stdev=1536.26 00:21:06.076 clat percentiles (msec): 00:21:06.076 | 1.00th=[ 257], 5.00th=[ 642], 10.00th=[ 659], 20.00th=[ 709], 00:21:06.076 | 30.00th=[ 751], 40.00th=[ 768], 50.00th=[ 802], 60.00th=[ 986], 00:21:06.076 | 70.00th=[ 2072], 80.00th=[ 3406], 90.00th=[ 4597], 95.00th=[ 5067], 00:21:06.076 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5470], 99.95th=[ 5470], 00:21:06.076 | 99.99th=[ 5470] 00:21:06.076 bw ( KiB/s): min= 6144, max=194560, per=1.95%, avg=77031.85, stdev=66951.16, samples=13 00:21:06.076 iops : min= 6, max= 190, avg=75.15, stdev=65.44, samples=13 00:21:06.076 lat (msec) : 100=0.16%, 250=0.65%, 500=1.79%, 750=28.08%, 1000=29.55% 00:21:06.076 lat (msec) : 2000=9.58%, >=2000=30.19% 00:21:06.076 cpu : usr=0.04%, sys=1.76%, ctx=956, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.077 issued rwts: total=616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job4: (groupid=0, jobs=1): err= 0: pid=1701781: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=45, BW=45.3MiB/s (47.5MB/s)(458MiB/10105msec) 00:21:06.077 slat (usec): min=117, max=1692.0k, avg=21856.64, stdev=81695.26 00:21:06.077 clat (msec): min=92, max=4700, avg=2507.55, stdev=1145.82 00:21:06.077 lat (msec): min=111, max=4701, avg=2529.41, stdev=1147.37 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 174], 5.00th=[ 718], 10.00th=[ 1099], 20.00th=[ 1703], 00:21:06.077 | 30.00th=[ 2022], 40.00th=[ 2106], 50.00th=[ 2198], 60.00th=[ 2400], 00:21:06.077 | 70.00th=[ 2635], 80.00th=[ 3943], 90.00th=[ 4245], 95.00th=[ 4463], 00:21:06.077 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:21:06.077 | 99.99th=[ 4732] 00:21:06.077 bw ( KiB/s): min= 4096, max=122880, per=1.14%, avg=45104.27, stdev=32290.97, samples=15 00:21:06.077 iops : min= 4, max= 
120, avg=43.87, stdev=31.50, samples=15 00:21:06.077 lat (msec) : 100=0.22%, 250=1.31%, 500=1.31%, 750=2.62%, 1000=3.49% 00:21:06.077 lat (msec) : 2000=17.47%, >=2000=73.58% 00:21:06.077 cpu : usr=0.04%, sys=1.26%, ctx=1289, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.2% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.077 issued rwts: total=458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job4: (groupid=0, jobs=1): err= 0: pid=1701782: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=70, BW=70.7MiB/s (74.2MB/s)(710MiB/10036msec) 00:21:06.077 slat (usec): min=43, max=2074.0k, avg=14083.23, stdev=103133.45 00:21:06.077 clat (msec): min=33, max=4576, avg=1472.05, stdev=1319.11 00:21:06.077 lat (msec): min=54, max=4607, avg=1486.13, stdev=1324.19 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 83], 5.00th=[ 550], 10.00th=[ 567], 20.00th=[ 634], 00:21:06.077 | 30.00th=[ 726], 40.00th=[ 743], 50.00th=[ 802], 60.00th=[ 953], 00:21:06.077 | 70.00th=[ 1284], 80.00th=[ 2500], 90.00th=[ 4044], 95.00th=[ 4396], 00:21:06.077 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:21:06.077 | 99.99th=[ 4597] 00:21:06.077 bw ( KiB/s): min= 8192, max=223232, per=2.39%, avg=94555.08, stdev=72680.27, samples=12 00:21:06.077 iops : min= 8, max= 218, avg=92.33, stdev=70.98, samples=12 00:21:06.077 lat (msec) : 50=0.14%, 100=1.41%, 250=1.41%, 500=0.85%, 750=40.28% 00:21:06.077 lat (msec) : 1000=17.75%, 2000=17.04%, >=2000=21.13% 00:21:06.077 cpu : usr=0.04%, sys=1.32%, ctx=945, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.077 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job4: (groupid=0, jobs=1): err= 0: pid=1701783: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=65, BW=65.8MiB/s (69.0MB/s)(664MiB/10085msec) 00:21:06.077 slat (usec): min=44, max=108047, avg=15056.35, stdev=20142.93 00:21:06.077 clat (msec): min=83, max=2870, avg=1769.15, stdev=697.98 00:21:06.077 lat (msec): min=100, max=2873, avg=1784.20, stdev=700.11 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 234], 5.00th=[ 735], 10.00th=[ 885], 20.00th=[ 1045], 00:21:06.077 | 30.00th=[ 1217], 40.00th=[ 1536], 50.00th=[ 1770], 60.00th=[ 2072], 00:21:06.077 | 70.00th=[ 2400], 80.00th=[ 2500], 90.00th=[ 2635], 95.00th=[ 2702], 00:21:06.077 | 99.00th=[ 2836], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869], 00:21:06.077 | 99.99th=[ 2869] 00:21:06.077 bw ( KiB/s): min=32768, max=169984, per=1.63%, avg=64648.59, stdev=34499.86, samples=17 00:21:06.077 iops : min= 32, max= 166, avg=63.06, stdev=33.73, samples=17 00:21:06.077 lat (msec) : 100=0.15%, 250=0.90%, 500=1.66%, 750=2.71%, 1000=9.49% 00:21:06.077 lat (msec) : 2000=43.83%, >=2000=41.27% 00:21:06.077 cpu : usr=0.05%, sys=1.45%, ctx=1510, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 
4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.077 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job4: (groupid=0, jobs=1): err= 0: pid=1701784: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=50, BW=50.2MiB/s (52.7MB/s)(505MiB/10055msec) 00:21:06.077 slat (usec): min=36, max=2175.0k, avg=19801.75, stdev=98301.44 00:21:06.077 clat (msec): min=52, max=4100, avg=2335.68, stdev=1001.66 00:21:06.077 lat (msec): min=85, max=4144, avg=2355.48, stdev=1001.03 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 174], 5.00th=[ 726], 10.00th=[ 1250], 20.00th=[ 1720], 00:21:06.077 | 30.00th=[ 1938], 40.00th=[ 1972], 50.00th=[ 2089], 60.00th=[ 2232], 00:21:06.077 | 70.00th=[ 2366], 80.00th=[ 3742], 90.00th=[ 3910], 95.00th=[ 3977], 00:21:06.077 | 99.00th=[ 4044], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4111], 00:21:06.077 | 99.99th=[ 4111] 00:21:06.077 bw ( KiB/s): min=24576, max=75776, per=1.40%, avg=55276.79, stdev=16920.01, samples=14 00:21:06.077 iops : min= 24, max= 74, avg=53.93, stdev=16.56, samples=14 00:21:06.077 lat (msec) : 100=0.79%, 250=0.99%, 500=1.78%, 750=1.78%, 1000=3.76% 00:21:06.077 lat (msec) : 2000=33.27%, >=2000=57.62% 00:21:06.077 cpu : usr=0.04%, sys=1.30%, ctx=1260, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.077 issued rwts: total=505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job4: (groupid=0, jobs=1): err= 0: pid=1701785: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(355MiB/10096msec) 00:21:06.077 slat (usec): min=39, max=2039.8k, avg=28167.51, stdev=142928.63 00:21:06.077 clat (msec): min=93, max=5780, avg=2402.25, stdev=1264.40 00:21:06.077 lat (msec): min=104, max=5793, avg=2430.42, stdev=1274.61 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 136], 5.00th=[ 376], 10.00th=[ 844], 20.00th=[ 1603], 00:21:06.077 | 30.00th=[ 2056], 40.00th=[ 2232], 50.00th=[ 2333], 60.00th=[ 2567], 00:21:06.077 | 70.00th=[ 2702], 80.00th=[ 2769], 90.00th=[ 4111], 95.00th=[ 5537], 00:21:06.077 | 99.00th=[ 5738], 99.50th=[ 5738], 99.90th=[ 5805], 99.95th=[ 5805], 00:21:06.077 | 99.99th=[ 5805] 00:21:06.077 bw ( KiB/s): min=20480, max=102400, per=1.18%, avg=46694.40, stdev=23821.06, samples=10 00:21:06.077 iops : min= 20, max= 100, avg=45.60, stdev=23.26, samples=10 00:21:06.077 lat (msec) : 100=0.28%, 250=3.10%, 500=2.82%, 750=2.82%, 1000=4.51% 00:21:06.077 lat (msec) : 2000=14.37%, >=2000=72.11% 00:21:06.077 cpu : usr=0.02%, sys=1.32%, ctx=1214, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.5%, 32=9.0%, >=64=82.3% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:06.077 issued rwts: total=355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job5: (groupid=0, jobs=1): err= 0: pid=1701786: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=105, BW=105MiB/s (110MB/s)(1056MiB/10032msec) 00:21:06.077 slat (usec): min=47, max=641410, avg=9464.06, stdev=24769.30 00:21:06.077 clat 
(msec): min=30, max=3241, avg=1141.87, stdev=421.46 00:21:06.077 lat (msec): min=33, max=3324, avg=1151.34, stdev=423.19 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 136], 5.00th=[ 718], 10.00th=[ 768], 20.00th=[ 810], 00:21:06.077 | 30.00th=[ 835], 40.00th=[ 877], 50.00th=[ 978], 60.00th=[ 1284], 00:21:06.077 | 70.00th=[ 1351], 80.00th=[ 1519], 90.00th=[ 1787], 95.00th=[ 1854], 00:21:06.077 | 99.00th=[ 1955], 99.50th=[ 1989], 99.90th=[ 3205], 99.95th=[ 3239], 00:21:06.077 | 99.99th=[ 3239] 00:21:06.077 bw ( KiB/s): min= 4096, max=172032, per=2.67%, avg=105665.78, stdev=48880.60, samples=18 00:21:06.077 iops : min= 4, max= 168, avg=103.17, stdev=47.76, samples=18 00:21:06.077 lat (msec) : 50=0.28%, 100=0.66%, 250=0.76%, 500=0.66%, 750=6.25% 00:21:06.077 lat (msec) : 1000=42.23%, 2000=48.77%, >=2000=0.38% 00:21:06.077 cpu : usr=0.10%, sys=1.86%, ctx=1633, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.077 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job5: (groupid=0, jobs=1): err= 0: pid=1701787: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=51, BW=51.3MiB/s (53.8MB/s)(514MiB/10018msec) 00:21:06.077 slat (usec): min=38, max=2053.0k, avg=19456.76, stdev=122760.91 00:21:06.077 clat (msec): min=14, max=5171, avg=2372.17, stdev=1570.97 00:21:06.077 lat (msec): min=17, max=5175, avg=2391.63, stdev=1574.91 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 31], 5.00th=[ 86], 10.00th=[ 435], 20.00th=[ 844], 00:21:06.077 | 30.00th=[ 902], 40.00th=[ 1569], 50.00th=[ 2198], 60.00th=[ 2970], 00:21:06.077 | 70.00th=[ 3071], 80.00th=[ 3876], 90.00th=[ 4799], 95.00th=[ 5067], 00:21:06.077 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201], 00:21:06.077 | 99.99th=[ 5201] 00:21:06.077 bw ( KiB/s): min= 2048, max=145408, per=1.31%, avg=51964.38, stdev=40039.91, samples=13 00:21:06.077 iops : min= 2, max= 142, avg=50.62, stdev=39.03, samples=13 00:21:06.077 lat (msec) : 20=0.39%, 50=1.56%, 100=3.70%, 250=0.97%, 500=4.47% 00:21:06.077 lat (msec) : 750=5.45%, 1000=16.93%, 2000=10.70%, >=2000=55.84% 00:21:06.077 cpu : usr=0.01%, sys=1.62%, ctx=1156, majf=0, minf=32769 00:21:06.077 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:21:06.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.077 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.077 issued rwts: total=514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.077 job5: (groupid=0, jobs=1): err= 0: pid=1701788: Mon Jul 15 18:13:04 2024 00:21:06.077 read: IOPS=89, BW=89.5MiB/s (93.8MB/s)(901MiB/10071msec) 00:21:06.077 slat (usec): min=69, max=1730.6k, avg=11095.37, stdev=74407.10 00:21:06.077 clat (msec): min=69, max=3948, avg=1250.84, stdev=1135.32 00:21:06.077 lat (msec): min=79, max=3974, avg=1261.94, stdev=1141.27 00:21:06.077 clat percentiles (msec): 00:21:06.077 | 1.00th=[ 159], 5.00th=[ 309], 10.00th=[ 363], 20.00th=[ 397], 00:21:06.078 | 30.00th=[ 468], 40.00th=[ 531], 50.00th=[ 609], 60.00th=[ 919], 00:21:06.078 | 70.00th=[ 1435], 80.00th=[ 2400], 90.00th=[ 3272], 95.00th=[ 3742], 00:21:06.078 | 99.00th=[ 3910], 
99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:21:06.078 | 99.99th=[ 3943] 00:21:06.078 bw ( KiB/s): min= 2048, max=391168, per=2.67%, avg=105669.60, stdev=108994.17, samples=15 00:21:06.078 iops : min= 2, max= 382, avg=103.07, stdev=106.54, samples=15 00:21:06.078 lat (msec) : 100=0.44%, 250=1.33%, 500=34.63%, 750=20.53%, 1000=5.88% 00:21:06.078 lat (msec) : 2000=10.88%, >=2000=26.30% 00:21:06.078 cpu : usr=0.04%, sys=1.72%, ctx=2976, majf=0, minf=32769 00:21:06.078 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:21:06.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.078 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.078 issued rwts: total=901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.078 job5: (groupid=0, jobs=1): err= 0: pid=1701789: Mon Jul 15 18:13:04 2024 00:21:06.078 read: IOPS=47, BW=47.4MiB/s (49.7MB/s)(478MiB/10077msec) 00:21:06.078 slat (usec): min=61, max=2123.0k, avg=20922.80, stdev=141179.92 00:21:06.078 clat (msec): min=73, max=7939, avg=2565.46, stdev=2501.73 00:21:06.078 lat (msec): min=77, max=7952, avg=2586.39, stdev=2512.07 00:21:06.078 clat percentiles (msec): 00:21:06.078 | 1.00th=[ 90], 5.00th=[ 222], 10.00th=[ 275], 20.00th=[ 558], 00:21:06.078 | 30.00th=[ 726], 40.00th=[ 793], 50.00th=[ 936], 60.00th=[ 3406], 00:21:06.078 | 70.00th=[ 4010], 80.00th=[ 4732], 90.00th=[ 7148], 95.00th=[ 7886], 00:21:06.078 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:21:06.078 | 99.99th=[ 7953] 00:21:06.078 bw ( KiB/s): min= 6144, max=198656, per=1.51%, avg=59904.00, stdev=64344.81, samples=12 00:21:06.078 iops : min= 6, max= 194, avg=58.50, stdev=62.84, samples=12 00:21:06.078 lat (msec) : 100=1.46%, 250=6.69%, 500=9.21%, 750=20.71%, 1000=16.53% 00:21:06.078 lat (msec) : 2000=4.39%, >=2000=41.00% 00:21:06.078 cpu : usr=0.00%, sys=1.43%, ctx=1502, majf=0, minf=32769 00:21:06.078 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.8% 00:21:06.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.078 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:06.078 issued rwts: total=478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.078 job5: (groupid=0, jobs=1): err= 0: pid=1701790: Mon Jul 15 18:13:04 2024 00:21:06.078 read: IOPS=69, BW=69.8MiB/s (73.2MB/s)(702MiB/10058msec) 00:21:06.078 slat (usec): min=39, max=2057.3k, avg=14277.71, stdev=102868.56 00:21:06.078 clat (msec): min=32, max=5259, avg=1507.27, stdev=1589.57 00:21:06.078 lat (msec): min=66, max=5266, avg=1521.54, stdev=1596.60 00:21:06.078 clat percentiles (msec): 00:21:06.078 | 1.00th=[ 148], 5.00th=[ 401], 10.00th=[ 435], 20.00th=[ 468], 00:21:06.078 | 30.00th=[ 493], 40.00th=[ 542], 50.00th=[ 609], 60.00th=[ 701], 00:21:06.078 | 70.00th=[ 1519], 80.00th=[ 2400], 90.00th=[ 4732], 95.00th=[ 5000], 00:21:06.078 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5269], 99.95th=[ 5269], 00:21:06.078 | 99.99th=[ 5269] 00:21:06.078 bw ( KiB/s): min=10240, max=268288, per=2.29%, avg=90427.08, stdev=95826.00, samples=13 00:21:06.078 iops : min= 10, max= 262, avg=88.31, stdev=93.58, samples=13 00:21:06.078 lat (msec) : 50=0.14%, 100=0.43%, 250=2.56%, 500=29.77%, 750=27.92% 00:21:06.078 lat (msec) : 1000=2.42%, 2000=10.40%, >=2000=26.35% 00:21:06.078 cpu : usr=0.00%, sys=1.23%, ctx=2131, 
majf=0, minf=32769 00:21:06.078 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=91.0% 00:21:06.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.078 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:06.078 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.078 job5: (groupid=0, jobs=1): err= 0: pid=1701791: Mon Jul 15 18:13:04 2024 00:21:06.078 read: IOPS=123, BW=123MiB/s (129MB/s)(1233MiB/10015msec) 00:21:06.078 slat (usec): min=40, max=2078.1k, avg=8107.77, stdev=76812.12 00:21:06.078 clat (msec): min=13, max=4670, avg=829.34, stdev=1184.50 00:21:06.078 lat (msec): min=17, max=4674, avg=837.45, stdev=1190.60 00:21:06.078 clat percentiles (msec): 00:21:06.078 | 1.00th=[ 54], 5.00th=[ 197], 10.00th=[ 266], 20.00th=[ 309], 00:21:06.078 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 405], 00:21:06.078 | 70.00th=[ 468], 80.00th=[ 592], 90.00th=[ 3004], 95.00th=[ 4396], 00:21:06.078 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4665], 99.95th=[ 4665], 00:21:06.078 | 99.99th=[ 4665] 00:21:06.078 bw ( KiB/s): min=14336, max=385024, per=4.40%, avg=174237.54, stdev=146341.18, samples=13 00:21:06.078 iops : min= 14, max= 376, avg=170.15, stdev=142.91, samples=13 00:21:06.078 lat (msec) : 20=0.24%, 50=0.65%, 100=1.95%, 250=2.92%, 500=71.78% 00:21:06.078 lat (msec) : 750=5.92%, 1000=1.30%, 2000=3.16%, >=2000=12.08% 00:21:06.078 cpu : usr=0.06%, sys=1.63%, ctx=4356, majf=0, minf=32769 00:21:06.078 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:21:06.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.078 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.078 issued rwts: total=1233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.078 job5: (groupid=0, jobs=1): err= 0: pid=1701792: Mon Jul 15 18:13:04 2024 00:21:06.078 read: IOPS=93, BW=93.5MiB/s (98.0MB/s)(943MiB/10086msec) 00:21:06.078 slat (usec): min=46, max=2059.0k, avg=10623.79, stdev=86230.75 00:21:06.078 clat (msec): min=63, max=4468, avg=1124.98, stdev=1252.63 00:21:06.078 lat (msec): min=111, max=4479, avg=1135.61, stdev=1259.89 00:21:06.078 clat percentiles (msec): 00:21:06.078 | 1.00th=[ 155], 5.00th=[ 271], 10.00th=[ 288], 20.00th=[ 393], 00:21:06.078 | 30.00th=[ 414], 40.00th=[ 418], 50.00th=[ 443], 60.00th=[ 527], 00:21:06.078 | 70.00th=[ 1020], 80.00th=[ 1804], 90.00th=[ 3775], 95.00th=[ 4178], 00:21:06.078 | 99.00th=[ 4396], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:21:06.078 | 99.99th=[ 4463] 00:21:06.078 bw ( KiB/s): min=12288, max=393216, per=3.24%, avg=128319.15, stdev=120373.83, samples=13 00:21:06.078 iops : min= 12, max= 384, avg=125.15, stdev=117.56, samples=13 00:21:06.078 lat (msec) : 100=0.11%, 250=2.01%, 500=55.57%, 750=6.36%, 1000=5.94% 00:21:06.078 lat (msec) : 2000=11.88%, >=2000=18.13% 00:21:06.078 cpu : usr=0.01%, sys=1.33%, ctx=2851, majf=0, minf=32769 00:21:06.078 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:21:06.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.078 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.078 issued rwts: total=943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.078 job5: 
(groupid=0, jobs=1): err= 0: pid=1701793: Mon Jul 15 18:13:04 2024 00:21:06.078 read: IOPS=31, BW=31.3MiB/s (32.8MB/s)(315MiB/10065msec) 00:21:06.078 slat (usec): min=111, max=2087.0k, avg=31745.76, stdev=179298.44 00:21:06.078 clat (msec): min=63, max=5742, avg=2152.79, stdev=1312.43 00:21:06.078 lat (msec): min=99, max=5790, avg=2184.54, stdev=1329.24 00:21:06.078 clat percentiles (msec): 00:21:06.078 | 1.00th=[ 140], 5.00th=[ 397], 10.00th=[ 485], 20.00th=[ 1028], 00:21:06.078 | 30.00th=[ 1133], 40.00th=[ 1284], 50.00th=[ 1687], 60.00th=[ 3205], 00:21:06.078 | 70.00th=[ 3406], 80.00th=[ 3540], 90.00th=[ 3775], 95.00th=[ 3842], 00:21:06.078 | 99.00th=[ 3943], 99.50th=[ 5671], 99.90th=[ 5738], 99.95th=[ 5738], 00:21:06.078 | 99.99th=[ 5738] 00:21:06.078 bw ( KiB/s): min=10240, max=126976, per=1.39%, avg=55003.43, stdev=42491.74, samples=7 00:21:06.078 iops : min= 10, max= 124, avg=53.71, stdev=41.50, samples=7 00:21:06.078 lat (msec) : 100=0.63%, 250=1.59%, 500=9.21%, 750=3.81%, 1000=2.86% 00:21:06.078 lat (msec) : 2000=38.73%, >=2000=43.17% 00:21:06.078 cpu : usr=0.01%, sys=0.81%, ctx=1263, majf=0, minf=32769 00:21:06.078 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.5%, 16=5.1%, 32=10.2%, >=64=80.0% 00:21:06.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.078 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:06.078 issued rwts: total=315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.078 job5: (groupid=0, jobs=1): err= 0: pid=1701794: Mon Jul 15 18:13:04 2024 00:21:06.078 read: IOPS=79, BW=79.6MiB/s (83.4MB/s)(797MiB/10015msec) 00:21:06.078 slat (usec): min=44, max=2136.4k, avg=12544.41, stdev=105694.24 00:21:06.078 clat (msec): min=13, max=4483, avg=1542.48, stdev=1527.91 00:21:06.078 lat (msec): min=15, max=4503, avg=1555.02, stdev=1534.87 00:21:06.078 clat percentiles (msec): 00:21:06.078 | 1.00th=[ 27], 5.00th=[ 218], 10.00th=[ 313], 20.00th=[ 460], 00:21:06.078 | 30.00th=[ 498], 40.00th=[ 514], 50.00th=[ 523], 60.00th=[ 885], 00:21:06.078 | 70.00th=[ 2668], 80.00th=[ 3708], 90.00th=[ 4010], 95.00th=[ 4144], 00:21:06.078 | 99.00th=[ 4396], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:21:06.078 | 99.99th=[ 4463] 00:21:06.078 bw ( KiB/s): min= 6144, max=283190, per=2.67%, avg=105594.31, stdev=95561.92, samples=13 00:21:06.078 iops : min= 6, max= 276, avg=103.08, stdev=93.24, samples=13 00:21:06.078 lat (msec) : 20=0.38%, 50=1.88%, 100=1.51%, 250=2.13%, 500=26.35% 00:21:06.078 lat (msec) : 750=26.73%, 1000=3.01%, 2000=5.40%, >=2000=32.62% 00:21:06.079 cpu : usr=0.07%, sys=1.36%, ctx=1989, majf=0, minf=32769 00:21:06.079 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:21:06.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.079 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.079 issued rwts: total=797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.079 job5: (groupid=0, jobs=1): err= 0: pid=1701795: Mon Jul 15 18:13:04 2024 00:21:06.079 read: IOPS=208, BW=208MiB/s (218MB/s)(2195MiB/10548msec) 00:21:06.079 slat (usec): min=46, max=2029.2k, avg=4768.98, stdev=60763.83 00:21:06.079 clat (msec): min=68, max=2723, avg=590.41, stdev=676.40 00:21:06.079 lat (msec): min=166, max=2725, avg=595.18, stdev=678.37 00:21:06.079 clat percentiles (msec): 00:21:06.079 | 1.00th=[ 249], 5.00th=[ 251], 
10.00th=[ 253], 20.00th=[ 255], 00:21:06.079 | 30.00th=[ 259], 40.00th=[ 264], 50.00th=[ 384], 60.00th=[ 388], 00:21:06.079 | 70.00th=[ 422], 80.00th=[ 514], 90.00th=[ 2232], 95.00th=[ 2433], 00:21:06.079 | 99.00th=[ 2702], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:21:06.079 | 99.99th=[ 2735] 00:21:06.079 bw ( KiB/s): min= 2048, max=509952, per=8.23%, avg=325632.00, stdev=145504.12, samples=13 00:21:06.079 iops : min= 2, max= 498, avg=318.00, stdev=142.09, samples=13 00:21:06.079 lat (msec) : 100=0.05%, 250=2.69%, 500=73.26%, 750=12.44%, >=2000=11.57% 00:21:06.079 cpu : usr=0.13%, sys=2.65%, ctx=2500, majf=0, minf=32769 00:21:06.079 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:21:06.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.079 issued rwts: total=2195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.079 job5: (groupid=0, jobs=1): err= 0: pid=1701796: Mon Jul 15 18:13:04 2024 00:21:06.079 read: IOPS=25, BW=25.3MiB/s (26.6MB/s)(254MiB/10029msec) 00:21:06.079 slat (usec): min=42, max=2096.1k, avg=39367.87, stdev=212135.19 00:21:06.079 clat (msec): min=28, max=6600, avg=2542.88, stdev=1895.50 00:21:06.079 lat (msec): min=36, max=8389, avg=2582.25, stdev=1930.47 00:21:06.079 clat percentiles (msec): 00:21:06.079 | 1.00th=[ 39], 5.00th=[ 89], 10.00th=[ 284], 20.00th=[ 584], 00:21:06.079 | 30.00th=[ 852], 40.00th=[ 1217], 50.00th=[ 1620], 60.00th=[ 4279], 00:21:06.079 | 70.00th=[ 4396], 80.00th=[ 4463], 90.00th=[ 4463], 95.00th=[ 4597], 00:21:06.079 | 99.00th=[ 6477], 99.50th=[ 6544], 99.90th=[ 6611], 99.95th=[ 6611], 00:21:06.079 | 99.99th=[ 6611] 00:21:06.079 bw ( KiB/s): min=22528, max=98304, per=1.64%, avg=65024.00, stdev=38182.07, samples=4 00:21:06.079 iops : min= 22, max= 96, avg=63.50, stdev=37.29, samples=4 00:21:06.079 lat (msec) : 50=4.33%, 100=1.97%, 250=3.54%, 500=6.69%, 750=7.87% 00:21:06.079 lat (msec) : 1000=12.20%, 2000=15.35%, >=2000=48.03% 00:21:06.079 cpu : usr=0.02%, sys=0.75%, ctx=1012, majf=0, minf=32769 00:21:06.079 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.6%, >=64=75.2% 00:21:06.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.079 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:21:06.079 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.079 job5: (groupid=0, jobs=1): err= 0: pid=1701797: Mon Jul 15 18:13:04 2024 00:21:06.079 read: IOPS=33, BW=33.4MiB/s (35.1MB/s)(336MiB/10046msec) 00:21:06.079 slat (usec): min=88, max=2076.2k, avg=29759.91, stdev=186176.42 00:21:06.079 clat (msec): min=45, max=5757, avg=2106.72, stdev=1606.32 00:21:06.079 lat (msec): min=46, max=5813, avg=2136.48, stdev=1623.18 00:21:06.079 clat percentiles (msec): 00:21:06.079 | 1.00th=[ 51], 5.00th=[ 78], 10.00th=[ 232], 20.00th=[ 584], 00:21:06.079 | 30.00th=[ 835], 40.00th=[ 1133], 50.00th=[ 1200], 60.00th=[ 3440], 00:21:06.079 | 70.00th=[ 3742], 80.00th=[ 3910], 90.00th=[ 4077], 95.00th=[ 4245], 00:21:06.079 | 99.00th=[ 4329], 99.50th=[ 5738], 99.90th=[ 5738], 99.95th=[ 5738], 00:21:06.079 | 99.99th=[ 5738] 00:21:06.079 bw ( KiB/s): min=18432, max=122880, per=1.80%, avg=71338.67, stdev=53553.01, samples=6 00:21:06.079 iops : min= 18, max= 120, avg=69.67, stdev=52.30, samples=6 00:21:06.079 lat 
(msec) : 50=1.19%, 100=4.46%, 250=5.95%, 500=5.36%, 750=11.61% 00:21:06.079 lat (msec) : 1000=5.36%, 2000=22.32%, >=2000=43.75% 00:21:06.079 cpu : usr=0.01%, sys=0.84%, ctx=1440, majf=0, minf=32769 00:21:06.079 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.5%, >=64=81.2% 00:21:06.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.079 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:06.079 issued rwts: total=336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.079 job5: (groupid=0, jobs=1): err= 0: pid=1701798: Mon Jul 15 18:13:04 2024 00:21:06.079 read: IOPS=136, BW=136MiB/s (143MB/s)(1374MiB/10086msec) 00:21:06.079 slat (usec): min=47, max=575577, avg=7284.05, stdev=20647.47 00:21:06.079 clat (msec): min=67, max=2118, avg=894.72, stdev=361.85 00:21:06.079 lat (msec): min=89, max=2146, avg=902.00, stdev=363.61 00:21:06.079 clat percentiles (msec): 00:21:06.079 | 1.00th=[ 117], 5.00th=[ 558], 10.00th=[ 609], 20.00th=[ 676], 00:21:06.079 | 30.00th=[ 726], 40.00th=[ 751], 50.00th=[ 802], 60.00th=[ 902], 00:21:06.079 | 70.00th=[ 944], 80.00th=[ 995], 90.00th=[ 1469], 95.00th=[ 1636], 00:21:06.079 | 99.00th=[ 2089], 99.50th=[ 2106], 99.90th=[ 2123], 99.95th=[ 2123], 00:21:06.079 | 99.99th=[ 2123] 00:21:06.079 bw ( KiB/s): min=47104, max=233472, per=3.39%, avg=134261.53, stdev=51153.82, samples=19 00:21:06.079 iops : min= 46, max= 228, avg=131.00, stdev=49.94, samples=19 00:21:06.079 lat (msec) : 100=0.44%, 250=2.04%, 500=1.75%, 750=35.44%, 1000=41.63% 00:21:06.079 lat (msec) : 2000=16.89%, >=2000=1.82% 00:21:06.079 cpu : usr=0.12%, sys=2.60%, ctx=1540, majf=0, minf=32769 00:21:06.079 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:21:06.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.079 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.079 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.079 00:21:06.079 Run status group 0 (all jobs): 00:21:06.079 READ: bw=3864MiB/s (4052MB/s), 1269KiB/s-208MiB/s (1299kB/s-218MB/s), io=40.1GiB (43.1GB), run=10015-10631msec 00:21:06.079 00:21:06.079 Disk stats (read/write): 00:21:06.079 nvme0n1: ios=30511/0, merge=0/0, ticks=5985197/0, in_queue=5985197, util=98.11% 00:21:06.079 nvme1n1: ios=31567/0, merge=0/0, ticks=5203526/0, in_queue=5203526, util=98.33% 00:21:06.079 nvme2n1: ios=46139/0, merge=0/0, ticks=7571347/0, in_queue=7571347, util=98.54% 00:21:06.079 nvme3n1: ios=51306/0, merge=0/0, ticks=5661996/0, in_queue=5661996, util=98.70% 00:21:06.079 nvme4n1: ios=77923/0, merge=0/0, ticks=7689855/0, in_queue=7689855, util=98.49% 00:21:06.079 nvme5n1: ios=88658/0, merge=0/0, ticks=6211719/0, in_queue=6211719, util=99.13% 00:21:06.079 18:13:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:21:06.079 18:13:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:21:06.079 18:13:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:06.079 18:13:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:21:06.079 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect 
SPDK00000000000000 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:06.079 18:13:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:07.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:07.016 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:21:07.016 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:07.016 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:07.016 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:21:07.016 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:21:07.016 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:07.300 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:07.300 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.300 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.301 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:07.301 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.301 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:07.301 18:13:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:08.234 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDK00000000000002 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:08.234 18:13:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:09.198 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:09.198 18:13:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:10.130 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:10.130 18:13:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:11.064 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.064 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:11.064 rmmod nvme_rdma 00:21:11.064 rmmod nvme_fabrics 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 1700115 ']' 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 1700115 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # '[' -z 1700115 ']' 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # kill -0 1700115 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # uname 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1700115 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1700115' 00:21:11.323 killing process with pid 1700115 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # kill 1700115 00:21:11.323 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # wait 1700115 00:21:11.582 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.582 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:11.582 00:21:11.582 real 0m33.625s 00:21:11.582 user 1m51.435s 00:21:11.582 sys 0m18.087s 00:21:11.582 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:11.582 18:13:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:11.582 ************************************ 00:21:11.582 END TEST nvmf_srq_overwhelm 00:21:11.582 ************************************ 00:21:11.582 18:13:11 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:11.582 18:13:11 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:11.582 18:13:11 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:11.582 18:13:11 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.582 18:13:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:11.582 ************************************ 00:21:11.582 START TEST nvmf_shutdown 00:21:11.582 ************************************ 00:21:11.582 18:13:11 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:11.841 * Looking for test storage... 
00:21:11.841 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:11.842 ************************************ 00:21:11.842 START TEST nvmf_shutdown_tc1 00:21:11.842 ************************************ 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:11.842 18:13:12 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:11.842 18:13:12 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:19.962 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:19.962 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:19.963 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:19.963 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:19.963 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:19.963 18:13:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:19.963 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:19.963 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:19.963 altname enp217s0f0np0 00:21:19.963 altname ens818f0np0 00:21:19.963 inet 192.168.100.8/24 scope global mlx_0_0 00:21:19.963 valid_lft forever preferred_lft forever 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:19.963 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:19.963 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:19.963 altname enp217s0f1np1 00:21:19.963 altname ens818f1np1 00:21:19.963 inet 192.168.100.9/24 scope global mlx_0_1 00:21:19.963 valid_lft forever preferred_lft forever 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:19.963 18:13:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:19.963 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:19.964 192.168.100.9' 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:19.964 192.168.100.9' 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:21:19.964 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:20.223 192.168.100.9' 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1708536 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1708536 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1708536 ']' 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.223 18:13:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.223 [2024-07-15 18:13:20.451711] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:20.223 [2024-07-15 18:13:20.451762] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.223 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.223 [2024-07-15 18:13:20.537526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.223 [2024-07-15 18:13:20.609658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.223 [2024-07-15 18:13:20.609700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.223 [2024-07-15 18:13:20.609709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.223 [2024-07-15 18:13:20.609717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:20.223 [2024-07-15 18:13:20.609741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.223 [2024-07-15 18:13:20.609852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.223 [2024-07-15 18:13:20.609943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.223 [2024-07-15 18:13:20.610053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.223 [2024-07-15 18:13:20.610054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.160 [2024-07-15 18:13:21.337735] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fce0d0/0x1fd25c0) succeed. 00:21:21.160 [2024-07-15 18:13:21.347567] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fcf710/0x2013c50) succeed. 
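Note on the trace above: the allocate_nic_ips/get_ip_address steps reduce to a short pipeline over "ip -o -4 addr show". A minimal stand-alone sketch of that lookup, using the interface names and addresses observed in this particular run (they will differ on other hosts), is:

# Sketch of the per-interface address lookup exercised by nvmf/common.sh above;
# not the full helper, only the pipeline visible in the trace.
get_ip_address() {
    local interface=$1
    # $4 of "ip -o -4 addr show" is the CIDR address; cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run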
00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.160 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.161 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.161 Malloc1 00:21:21.420 [2024-07-15 18:13:21.576131] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:21.420 Malloc2 00:21:21.420 Malloc3 00:21:21.420 Malloc4 
00:21:21.420 Malloc5 00:21:21.420 Malloc6 00:21:21.678 Malloc7 00:21:21.678 Malloc8 00:21:21.678 Malloc9 00:21:21.678 Malloc10 00:21:21.678 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.678 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:21.678 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.678 18:13:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1708861 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1708861 /var/tmp/bdevperf.sock 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1708861 ']' 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.678 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": "$TEST_TRANSPORT", 00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": 
"$TEST_TRANSPORT", 00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": "$TEST_TRANSPORT", 00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": "$TEST_TRANSPORT", 00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": "$TEST_TRANSPORT", 00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": "$TEST_TRANSPORT", 
00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 [2024-07-15 18:13:22.066509] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:21.679 [2024-07-15 18:13:22.066562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.679 { 00:21:21.679 "params": { 00:21:21.679 "name": "Nvme$subsystem", 00:21:21.679 "trtype": "$TEST_TRANSPORT", 00:21:21.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.679 "adrfam": "ipv4", 00:21:21.679 "trsvcid": "$NVMF_PORT", 00:21:21.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.679 "hdgst": ${hdgst:-false}, 00:21:21.679 "ddgst": ${ddgst:-false} 00:21:21.679 }, 00:21:21.679 "method": "bdev_nvme_attach_controller" 00:21:21.679 } 00:21:21.679 EOF 00:21:21.679 )") 00:21:21.679 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.984 { 00:21:21.984 "params": { 00:21:21.984 "name": "Nvme$subsystem", 00:21:21.984 "trtype": "$TEST_TRANSPORT", 00:21:21.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.984 "adrfam": "ipv4", 00:21:21.984 "trsvcid": "$NVMF_PORT", 00:21:21.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.984 "hdgst": ${hdgst:-false}, 00:21:21.984 "ddgst": ${ddgst:-false} 00:21:21.984 }, 00:21:21.984 "method": "bdev_nvme_attach_controller" 00:21:21.984 } 00:21:21.984 EOF 00:21:21.984 )") 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.984 { 00:21:21.984 "params": { 00:21:21.984 "name": "Nvme$subsystem", 00:21:21.984 "trtype": "$TEST_TRANSPORT", 00:21:21.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.984 "adrfam": "ipv4", 00:21:21.984 "trsvcid": "$NVMF_PORT", 00:21:21.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.984 "hdgst": ${hdgst:-false}, 00:21:21.984 "ddgst": ${ddgst:-false} 00:21:21.984 }, 00:21:21.984 "method": "bdev_nvme_attach_controller" 00:21:21.984 } 00:21:21.984 EOF 00:21:21.984 )") 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.984 { 00:21:21.984 "params": { 00:21:21.984 "name": "Nvme$subsystem", 00:21:21.984 "trtype": "$TEST_TRANSPORT", 00:21:21.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.984 "adrfam": "ipv4", 00:21:21.984 "trsvcid": "$NVMF_PORT", 00:21:21.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.984 "hdgst": ${hdgst:-false}, 00:21:21.984 "ddgst": ${ddgst:-false} 00:21:21.984 }, 00:21:21.984 "method": "bdev_nvme_attach_controller" 00:21:21.984 } 00:21:21.984 EOF 00:21:21.984 )") 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:21.984 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:21.984 18:13:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:21.984 "params": { 00:21:21.984 "name": "Nvme1", 00:21:21.984 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme2", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme3", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme4", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme5", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme6", 00:21:21.985 "trtype": "rdma", 
00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme7", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme8", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme9", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 },{ 00:21:21.985 "params": { 00:21:21.985 "name": "Nvme10", 00:21:21.985 "trtype": "rdma", 00:21:21.985 "traddr": "192.168.100.8", 00:21:21.985 "adrfam": "ipv4", 00:21:21.985 "trsvcid": "4420", 00:21:21.985 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:21.985 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:21.985 "hdgst": false, 00:21:21.985 "ddgst": false 00:21:21.985 }, 00:21:21.985 "method": "bdev_nvme_attach_controller" 00:21:21.985 }' 00:21:21.985 [2024-07-15 18:13:22.153501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.985 [2024-07-15 18:13:22.222976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1708861 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:22.922 18:13:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:23.858 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1708861 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock 
--json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1708536 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.858 { 00:21:23.858 "params": { 00:21:23.858 "name": "Nvme$subsystem", 00:21:23.858 "trtype": "$TEST_TRANSPORT", 00:21:23.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.858 "adrfam": "ipv4", 00:21:23.858 "trsvcid": "$NVMF_PORT", 00:21:23.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.858 "hdgst": ${hdgst:-false}, 00:21:23.858 "ddgst": ${ddgst:-false} 00:21:23.858 }, 00:21:23.858 "method": "bdev_nvme_attach_controller" 00:21:23.858 } 00:21:23.858 EOF 00:21:23.858 )") 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.858 { 00:21:23.858 "params": { 00:21:23.858 "name": "Nvme$subsystem", 00:21:23.858 "trtype": "$TEST_TRANSPORT", 00:21:23.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.858 "adrfam": "ipv4", 00:21:23.858 "trsvcid": "$NVMF_PORT", 00:21:23.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.858 "hdgst": ${hdgst:-false}, 00:21:23.858 "ddgst": ${ddgst:-false} 00:21:23.858 }, 00:21:23.858 "method": "bdev_nvme_attach_controller" 00:21:23.858 } 00:21:23.858 EOF 00:21:23.858 )") 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.858 { 00:21:23.858 "params": { 00:21:23.858 "name": "Nvme$subsystem", 00:21:23.858 "trtype": "$TEST_TRANSPORT", 00:21:23.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.858 "adrfam": "ipv4", 00:21:23.858 "trsvcid": "$NVMF_PORT", 00:21:23.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.858 "hdgst": ${hdgst:-false}, 00:21:23.858 "ddgst": ${ddgst:-false} 00:21:23.858 }, 00:21:23.858 "method": "bdev_nvme_attach_controller" 00:21:23.858 } 00:21:23.858 EOF 00:21:23.858 )") 00:21:23.858 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 [2024-07-15 18:13:24.137853] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
00:21:23.859 [2024-07-15 18:13:24.137905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709170 ] 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:23.859 { 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme$subsystem", 00:21:23.859 "trtype": "$TEST_TRANSPORT", 00:21:23.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "$NVMF_PORT", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:21:23.859 "hdgst": ${hdgst:-false}, 00:21:23.859 "ddgst": ${ddgst:-false} 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 } 00:21:23.859 EOF 00:21:23.859 )") 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:23.859 18:13:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme1", 00:21:23.859 "trtype": "rdma", 00:21:23.859 "traddr": "192.168.100.8", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "4420", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.859 "hdgst": false, 00:21:23.859 "ddgst": false 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 },{ 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme2", 00:21:23.859 "trtype": "rdma", 00:21:23.859 "traddr": "192.168.100.8", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "4420", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:23.859 "hdgst": false, 00:21:23.859 "ddgst": false 00:21:23.859 }, 00:21:23.859 "method": "bdev_nvme_attach_controller" 00:21:23.859 },{ 00:21:23.859 "params": { 00:21:23.859 "name": "Nvme3", 00:21:23.859 "trtype": "rdma", 00:21:23.859 "traddr": "192.168.100.8", 00:21:23.859 "adrfam": "ipv4", 00:21:23.859 "trsvcid": "4420", 00:21:23.859 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:23.859 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme4", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme5", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme6", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme7", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:23.860 "hdgst": false, 
00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme8", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme9", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 },{ 00:21:23.860 "params": { 00:21:23.860 "name": "Nvme10", 00:21:23.860 "trtype": "rdma", 00:21:23.860 "traddr": "192.168.100.8", 00:21:23.860 "adrfam": "ipv4", 00:21:23.860 "trsvcid": "4420", 00:21:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:23.860 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:23.860 "hdgst": false, 00:21:23.860 "ddgst": false 00:21:23.860 }, 00:21:23.860 "method": "bdev_nvme_attach_controller" 00:21:23.860 }' 00:21:23.860 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.860 [2024-07-15 18:13:24.226281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.118 [2024-07-15 18:13:24.297913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.054 Running I/O for 1 seconds... 00:21:25.990 00:21:25.990 Latency(us) 00:21:25.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.990 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme1n1 : 1.16 371.38 23.21 0.00 0.00 166387.17 8021.61 236558.75 00:21:25.990 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme2n1 : 1.16 387.34 24.21 0.00 0.00 157185.52 4561.31 163577.86 00:21:25.990 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme3n1 : 1.17 411.78 25.74 0.00 0.00 149180.86 5321.52 156866.97 00:21:25.990 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme4n1 : 1.17 412.24 25.77 0.00 0.00 147064.92 6134.17 150994.94 00:21:25.990 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme5n1 : 1.17 394.91 24.68 0.00 0.00 151232.34 7287.60 140089.75 00:21:25.990 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme6n1 : 1.18 405.62 25.35 0.00 0.00 145398.09 7444.89 132540.01 00:21:25.990 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme7n1 : 1.18 416.24 26.02 0.00 0.00 139900.26 7811.89 127506.84 00:21:25.990 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 
Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme8n1 : 1.18 415.81 25.99 0.00 0.00 138075.95 8283.75 123312.54 00:21:25.990 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.990 Nvme9n1 : 1.18 393.46 24.59 0.00 0.00 143396.50 8336.18 112407.35 00:21:25.990 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:25.990 Verification LBA range: start 0x0 length 0x400 00:21:25.991 Nvme10n1 : 1.17 273.08 17.07 0.00 0.00 204566.20 8808.04 389231.41 00:21:25.991 =================================================================================================================== 00:21:25.991 Total : 3881.87 242.62 0.00 0.00 152302.06 4561.31 389231.41 00:21:26.249 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:26.249 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:26.249 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:26.249 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:26.249 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:26.250 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:26.250 rmmod nvme_rdma 00:21:26.509 rmmod nvme_fabrics 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1708536 ']' 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1708536 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1708536 ']' 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1708536 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1708536 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 
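Note on the config printed before the bdevperf run: the '{ "params": { ... }, "method": "bdev_nvme_attach_controller" }' fragments are emitted by gen_nvmf_target_json, one per subsystem index passed to it. A rough stand-alone approximation of that loop, with the values this run substituted (the real helper additionally runs the result through jq and wraps the entries into the complete bdevperf JSON handed to --json), is:

# Approximation of the per-subsystem config generation traced above; values
# (traddr, trsvcid) are the ones printed by this run and are not universal.
config=()
for subsystem in 1 2 3 4 5 6 7 8 9 10; do
  # One bdev_nvme_attach_controller entry per NVMe-oF subsystem, as in the trace.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
(IFS=,; printf '%s\n' "${config[*]}")   # comma-joined, matching the printf output in the trace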
00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1708536' 00:21:26.509 killing process with pid 1708536 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1708536 00:21:26.509 18:13:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1708536 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:27.078 00:21:27.078 real 0m15.071s 00:21:27.078 user 0m31.511s 00:21:27.078 sys 0m7.433s 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.078 ************************************ 00:21:27.078 END TEST nvmf_shutdown_tc1 00:21:27.078 ************************************ 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:27.078 ************************************ 00:21:27.078 START TEST nvmf_shutdown_tc2 00:21:27.078 ************************************ 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@10 -- # set +x 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:27.078 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:27.078 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:27.078 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
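The trace above resolves each discovered Mellanox PCI function to its kernel net device by globbing sysfs rather than parsing lspci output. A minimal standalone sketch of the same lookup, assuming the PCI address seen in this run:

    # list the net device(s) backed by PCI function 0000:d9:00.0 (prints mlx_0_0 on this rig)
    ls /sys/bus/pci/devices/0000:d9:00.0/net/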
00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:27.078 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:27.079 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:27.079 18:13:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:27.079 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:27.079 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:27.079 altname enp217s0f0np0 00:21:27.079 altname ens818f0np0 00:21:27.079 inet 192.168.100.8/24 scope global mlx_0_0 00:21:27.079 valid_lft forever preferred_lft forever 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:27.079 18:13:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:27.079 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:27.079 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:27.079 altname enp217s0f1np1 00:21:27.079 altname ens818f1np1 00:21:27.079 inet 192.168.100.9/24 scope global mlx_0_1 00:21:27.079 valid_lft forever preferred_lft forever 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:27.079 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:27.341 192.168.100.9' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:27.341 192.168.100.9' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:27.341 192.168.100.9' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1709804 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1709804 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 
0 -e 0xFFFF -m 0x1E 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1709804 ']' 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.341 18:13:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.341 [2024-07-15 18:13:27.577532] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:27.341 [2024-07-15 18:13:27.577586] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.341 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.341 [2024-07-15 18:13:27.661252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.341 [2024-07-15 18:13:27.730449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.341 [2024-07-15 18:13:27.730491] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.341 [2024-07-15 18:13:27.730501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.341 [2024-07-15 18:13:27.730509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.341 [2024-07-15 18:13:27.730531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
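The target is brought up here with -m 0x1E (a core mask selecting cores 1-4, matching the four reactor threads reported below) and -e 0xFFFF to enable every tracepoint group. A rough standalone equivalent, assuming paths relative to the SPDK repo root and the default /var/tmp/spdk.sock RPC socket, and omitting the retry/cleanup logic that autotest_common.sh layers on top:

    # start the NVMe-oF target on cores 1-4 with all tracepoint groups enabled
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll until the RPC socket exists and the app reports that framework init completed
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init &> /dev/null; do
        sleep 0.5
    done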
00:21:27.341 [2024-07-15 18:13:27.730640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.341 [2024-07-15 18:13:27.730716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.341 [2024-07-15 18:13:27.730806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.341 [2024-07-15 18:13:27.730807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.279 [2024-07-15 18:13:28.461706] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd140d0/0xd185c0) succeed. 00:21:28.279 [2024-07-15 18:13:28.470989] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd15710/0xd59c50) succeed. 
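With both mlx5 IB devices registered, the RDMA transport created by the rpc_cmd above is in place and the script goes on to build ten subsystems, one per Malloc bdev, which is what produces the "NVMe/RDMA Target Listening" notice below. The exact batch written to rpcs.txt is not echoed in this log, but per subsystem it amounts to RPCs along these lines for i=1 (bdev size and serial number are illustrative, not taken from this run):

    ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512        # 64 MiB backing bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420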
00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.279 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.280 18:13:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.280 Malloc1 00:21:28.539 [2024-07-15 18:13:28.693561] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:28.539 Malloc2 00:21:28.539 Malloc3 00:21:28.539 Malloc4 
00:21:28.539 Malloc5 00:21:28.539 Malloc6 00:21:28.798 Malloc7 00:21:28.798 Malloc8 00:21:28.798 Malloc9 00:21:28.798 Malloc10 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1710123 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1710123 /var/tmp/bdevperf.sock 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1710123 ']' 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
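bdevperf is launched here as the initiator-side workload: -q 64 keeps 64 I/Os outstanding, -o 65536 uses 64 KiB I/Os, -w verify does read-back verification, and -t 10 runs for ten seconds. The --json /dev/fd/63 argument feeds it the attach-controller configuration that gen_nvmf_target_json prints further down; an equivalent hand-run invocation, assuming that JSON has been saved to a file (the filename is a placeholder), would be:

    # 64 KiB verify workload, queue depth 64, 10 s run, controllers described in bdevperf.json
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json bdevperf.json -q 64 -o 65536 -w verify -t 10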
00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.798 { 00:21:28.798 "params": { 00:21:28.798 "name": "Nvme$subsystem", 00:21:28.798 "trtype": "$TEST_TRANSPORT", 00:21:28.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.798 "adrfam": "ipv4", 00:21:28.798 "trsvcid": "$NVMF_PORT", 00:21:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.798 "hdgst": ${hdgst:-false}, 00:21:28.798 "ddgst": ${ddgst:-false} 00:21:28.798 }, 00:21:28.798 "method": "bdev_nvme_attach_controller" 00:21:28.798 } 00:21:28.798 EOF 00:21:28.798 )") 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.798 { 00:21:28.798 "params": { 00:21:28.798 "name": "Nvme$subsystem", 00:21:28.798 "trtype": "$TEST_TRANSPORT", 00:21:28.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.798 "adrfam": "ipv4", 00:21:28.798 "trsvcid": "$NVMF_PORT", 00:21:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.798 "hdgst": ${hdgst:-false}, 00:21:28.798 "ddgst": ${ddgst:-false} 00:21:28.798 }, 00:21:28.798 "method": "bdev_nvme_attach_controller" 00:21:28.798 } 00:21:28.798 EOF 00:21:28.798 )") 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.798 { 00:21:28.798 "params": { 00:21:28.798 "name": "Nvme$subsystem", 00:21:28.798 "trtype": "$TEST_TRANSPORT", 00:21:28.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.798 "adrfam": "ipv4", 00:21:28.798 "trsvcid": "$NVMF_PORT", 00:21:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.798 "hdgst": ${hdgst:-false}, 00:21:28.798 "ddgst": ${ddgst:-false} 00:21:28.798 }, 00:21:28.798 "method": "bdev_nvme_attach_controller" 00:21:28.798 } 00:21:28.798 EOF 00:21:28.798 )") 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.798 { 00:21:28.798 "params": { 00:21:28.798 "name": "Nvme$subsystem", 00:21:28.798 "trtype": "$TEST_TRANSPORT", 00:21:28.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.798 "adrfam": "ipv4", 00:21:28.798 "trsvcid": 
"$NVMF_PORT", 00:21:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.798 "hdgst": ${hdgst:-false}, 00:21:28.798 "ddgst": ${ddgst:-false} 00:21:28.798 }, 00:21:28.798 "method": "bdev_nvme_attach_controller" 00:21:28.798 } 00:21:28.798 EOF 00:21:28.798 )") 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.798 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.798 { 00:21:28.798 "params": { 00:21:28.798 "name": "Nvme$subsystem", 00:21:28.798 "trtype": "$TEST_TRANSPORT", 00:21:28.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.798 "adrfam": "ipv4", 00:21:28.798 "trsvcid": "$NVMF_PORT", 00:21:28.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.799 "hdgst": ${hdgst:-false}, 00:21:28.799 "ddgst": ${ddgst:-false} 00:21:28.799 }, 00:21:28.799 "method": "bdev_nvme_attach_controller" 00:21:28.799 } 00:21:28.799 EOF 00:21:28.799 )") 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:28.799 [2024-07-15 18:13:29.182429] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:28.799 [2024-07-15 18:13:29.182482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710123 ] 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.799 { 00:21:28.799 "params": { 00:21:28.799 "name": "Nvme$subsystem", 00:21:28.799 "trtype": "$TEST_TRANSPORT", 00:21:28.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.799 "adrfam": "ipv4", 00:21:28.799 "trsvcid": "$NVMF_PORT", 00:21:28.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.799 "hdgst": ${hdgst:-false}, 00:21:28.799 "ddgst": ${ddgst:-false} 00:21:28.799 }, 00:21:28.799 "method": "bdev_nvme_attach_controller" 00:21:28.799 } 00:21:28.799 EOF 00:21:28.799 )") 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.799 { 00:21:28.799 "params": { 00:21:28.799 "name": "Nvme$subsystem", 00:21:28.799 "trtype": "$TEST_TRANSPORT", 00:21:28.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.799 "adrfam": "ipv4", 00:21:28.799 "trsvcid": "$NVMF_PORT", 00:21:28.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.799 "hdgst": ${hdgst:-false}, 00:21:28.799 "ddgst": ${ddgst:-false} 00:21:28.799 }, 00:21:28.799 "method": "bdev_nvme_attach_controller" 00:21:28.799 } 00:21:28.799 EOF 00:21:28.799 )") 00:21:28.799 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:29.058 18:13:29 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.058 { 00:21:29.058 "params": { 00:21:29.058 "name": "Nvme$subsystem", 00:21:29.058 "trtype": "$TEST_TRANSPORT", 00:21:29.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.058 "adrfam": "ipv4", 00:21:29.058 "trsvcid": "$NVMF_PORT", 00:21:29.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.058 "hdgst": ${hdgst:-false}, 00:21:29.058 "ddgst": ${ddgst:-false} 00:21:29.058 }, 00:21:29.058 "method": "bdev_nvme_attach_controller" 00:21:29.058 } 00:21:29.058 EOF 00:21:29.058 )") 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.058 { 00:21:29.058 "params": { 00:21:29.058 "name": "Nvme$subsystem", 00:21:29.058 "trtype": "$TEST_TRANSPORT", 00:21:29.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.058 "adrfam": "ipv4", 00:21:29.058 "trsvcid": "$NVMF_PORT", 00:21:29.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.058 "hdgst": ${hdgst:-false}, 00:21:29.058 "ddgst": ${ddgst:-false} 00:21:29.058 }, 00:21:29.058 "method": "bdev_nvme_attach_controller" 00:21:29.058 } 00:21:29.058 EOF 00:21:29.058 )") 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.058 { 00:21:29.058 "params": { 00:21:29.058 "name": "Nvme$subsystem", 00:21:29.058 "trtype": "$TEST_TRANSPORT", 00:21:29.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.058 "adrfam": "ipv4", 00:21:29.058 "trsvcid": "$NVMF_PORT", 00:21:29.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.058 "hdgst": ${hdgst:-false}, 00:21:29.058 "ddgst": ${ddgst:-false} 00:21:29.058 }, 00:21:29.058 "method": "bdev_nvme_attach_controller" 00:21:29.058 } 00:21:29.058 EOF 00:21:29.058 )") 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:29.058 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:29.058 18:13:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:29.058 "params": { 00:21:29.058 "name": "Nvme1", 00:21:29.058 "trtype": "rdma", 00:21:29.058 "traddr": "192.168.100.8", 00:21:29.058 "adrfam": "ipv4", 00:21:29.058 "trsvcid": "4420", 00:21:29.058 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.058 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.058 "hdgst": false, 00:21:29.058 "ddgst": false 00:21:29.058 }, 00:21:29.058 "method": "bdev_nvme_attach_controller" 00:21:29.058 },{ 00:21:29.058 "params": { 00:21:29.058 "name": "Nvme2", 00:21:29.058 "trtype": "rdma", 00:21:29.058 "traddr": "192.168.100.8", 00:21:29.058 "adrfam": "ipv4", 00:21:29.058 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme3", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme4", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme5", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme6", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme7", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme8", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.059 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme9", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 },{ 00:21:29.059 "params": { 00:21:29.059 "name": "Nvme10", 00:21:29.059 "trtype": "rdma", 00:21:29.059 "traddr": "192.168.100.8", 00:21:29.059 "adrfam": "ipv4", 00:21:29.059 "trsvcid": "4420", 00:21:29.059 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.059 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.059 "hdgst": false, 00:21:29.059 "ddgst": false 00:21:29.059 }, 00:21:29.059 "method": "bdev_nvme_attach_controller" 00:21:29.059 }' 00:21:29.059 [2024-07-15 18:13:29.268431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.059 [2024-07-15 18:13:29.338621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.995 Running I/O for 10 seconds... 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.995 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.254 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.254 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 
00:21:30.254 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:30.254 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=139 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 139 -ge 100 ']' 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1710123 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1710123 ']' 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1710123 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.513 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1710123 00:21:30.772 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.772 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.772 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1710123' 00:21:30.772 killing process with pid 1710123 00:21:30.772 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1710123 00:21:30.772 18:13:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1710123 00:21:30.772 Received shutdown signal, test time was about 0.798710 seconds 00:21:30.772 00:21:30.772 Latency(us) 00:21:30.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.772 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme1n1 : 0.78 336.95 21.06 0.00 0.00 185951.65 6763.32 193776.84 00:21:30.772 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 
Nvme2n1 : 0.78 356.92 22.31 0.00 0.00 172252.01 6815.74 182871.65 00:21:30.772 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme3n1 : 0.79 403.45 25.22 0.00 0.00 149585.40 4430.23 166933.30 00:21:30.772 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme4n1 : 0.79 406.64 25.42 0.00 0.00 145299.13 7497.32 132540.01 00:21:30.772 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme5n1 : 0.79 405.79 25.36 0.00 0.00 143335.75 8388.61 124990.26 00:21:30.772 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme6n1 : 0.79 405.02 25.31 0.00 0.00 140432.92 9227.47 111568.49 00:21:30.772 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme7n1 : 0.79 404.28 25.27 0.00 0.00 137661.48 9961.47 100243.87 00:21:30.772 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme8n1 : 0.79 403.60 25.23 0.00 0.00 134640.76 10538.19 91016.40 00:21:30.772 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme9n1 : 0.79 402.82 25.18 0.00 0.00 132471.85 11377.05 98146.71 00:21:30.772 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.772 Verification LBA range: start 0x0 length 0x400 00:21:30.772 Nvme10n1 : 0.80 320.77 20.05 0.00 0.00 162117.94 2857.37 198810.01 00:21:30.772 =================================================================================================================== 00:21:30.772 Total : 3846.23 240.39 0.00 0.00 149184.05 2857.37 198810.01 00:21:31.031 18:13:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1709804 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:31.966 18:13:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:31.966 rmmod nvme_rdma 00:21:31.966 rmmod nvme_fabrics 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1709804 ']' 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1709804 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1709804 ']' 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1709804 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:31.966 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.967 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1709804 00:21:32.225 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:32.225 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:32.225 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1709804' 00:21:32.225 killing process with pid 1709804 00:21:32.225 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1709804 00:21:32.225 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1709804 00:21:32.484 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:32.484 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:32.484 00:21:32.484 real 0m5.571s 00:21:32.484 user 0m22.339s 00:21:32.484 sys 0m1.223s 00:21:32.484 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.484 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.484 ************************************ 00:21:32.484 END TEST nvmf_shutdown_tc2 00:21:32.484 ************************************ 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:32.744 ************************************ 00:21:32.744 START TEST nvmf_shutdown_tc3 00:21:32.744 ************************************ 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:32.744 18:13:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:32.744 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:32.744 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:32.744 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:32.744 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:32.744 18:13:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:32.744 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:32.745 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:32.745 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.745 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.745 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.745 18:13:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:32.745 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.745 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:32.745 altname enp217s0f0np0 00:21:32.745 altname ens818f0np0 00:21:32.745 inet 192.168.100.8/24 scope global mlx_0_0 00:21:32.745 valid_lft forever preferred_lft forever 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:32.745 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.745 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:32.745 altname enp217s0f1np1 00:21:32.745 altname ens818f1np1 00:21:32.745 inet 192.168.100.9/24 scope global mlx_0_1 00:21:32.745 valid_lft forever preferred_lft forever 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.745 18:13:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:32.745 192.168.100.9' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:32.745 192.168.100.9' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:32.745 192.168.100.9' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:32.745 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1710998 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1710998 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1710998 ']' 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.004 18:13:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:33.004 [2024-07-15 18:13:33.217289] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:33.004 [2024-07-15 18:13:33.217336] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.004 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.004 [2024-07-15 18:13:33.300099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.004 [2024-07-15 18:13:33.373004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.004 [2024-07-15 18:13:33.373049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.004 [2024-07-15 18:13:33.373069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.004 [2024-07-15 18:13:33.373077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:33.004 [2024-07-15 18:13:33.373100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.004 [2024-07-15 18:13:33.373200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.004 [2024-07-15 18:13:33.373309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.004 [2024-07-15 18:13:33.373420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.004 [2024-07-15 18:13:33.373421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.940 [2024-07-15 18:13:34.096483] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5620d0/0x5665c0) succeed. 00:21:33.940 [2024-07-15 18:13:34.105711] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x563710/0x5a7c50) succeed. 
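[Editor's note] The reactors above land on cores 1-4 because the target was started with `-m 0x1E`, while the bdevperf app later in this log runs with `-c 0x1` on core 0. A minimal, illustrative Bash helper (not part of the test scripts) that decodes such masks:

decode_core_mask() {
    # Hypothetical helper: expand an SPDK/DPDK core mask into the cores it selects.
    local mask=$(( $1 )) cores=() bit
    for (( bit = 0; bit < 64; bit++ )); do
        (( mask & (1 << bit) )) && cores+=("$bit")
    done
    echo "mask $(printf '0x%X' "$mask") -> cores: ${cores[*]}"
}

decode_core_mask 0x1E   # -> cores: 1 2 3 4  (the nvmf_tgt reactors above)
decode_core_mask 0x1    # -> cores: 0        (left free for the bdevperf app below)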
00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.940 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:33.940 Malloc1 00:21:33.940 [2024-07-15 18:13:34.323038] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:34.199 Malloc2 00:21:34.199 Malloc3 00:21:34.199 Malloc4 
00:21:34.199 Malloc5 00:21:34.199 Malloc6 00:21:34.199 Malloc7 00:21:34.457 Malloc8 00:21:34.457 Malloc9 00:21:34.457 Malloc10 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1711316 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1711316 /var/tmp/bdevperf.sock 00:21:34.457 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1711316 ']' 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 
00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 
"trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 [2024-07-15 18:13:34.816009] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:34.458 [2024-07-15 18:13:34.816069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711316 ] 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 
18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.458 { 00:21:34.458 "params": { 00:21:34.458 "name": "Nvme$subsystem", 00:21:34.458 "trtype": "$TEST_TRANSPORT", 00:21:34.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.458 "adrfam": "ipv4", 00:21:34.458 "trsvcid": "$NVMF_PORT", 00:21:34.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.458 "hdgst": ${hdgst:-false}, 00:21:34.458 "ddgst": ${ddgst:-false} 00:21:34.458 }, 00:21:34.458 "method": "bdev_nvme_attach_controller" 00:21:34.458 } 00:21:34.458 EOF 00:21:34.458 )") 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.458 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:34.458 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.719 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:34.719 18:13:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme1", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme2", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme3", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme4", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme5", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 
00:21:34.719 "name": "Nvme6", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme7", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme8", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.719 "hdgst": false, 00:21:34.719 "ddgst": false 00:21:34.719 }, 00:21:34.719 "method": "bdev_nvme_attach_controller" 00:21:34.719 },{ 00:21:34.719 "params": { 00:21:34.719 "name": "Nvme9", 00:21:34.719 "trtype": "rdma", 00:21:34.719 "traddr": "192.168.100.8", 00:21:34.719 "adrfam": "ipv4", 00:21:34.719 "trsvcid": "4420", 00:21:34.719 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.719 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.720 "hdgst": false, 00:21:34.720 "ddgst": false 00:21:34.720 }, 00:21:34.720 "method": "bdev_nvme_attach_controller" 00:21:34.720 },{ 00:21:34.720 "params": { 00:21:34.720 "name": "Nvme10", 00:21:34.720 "trtype": "rdma", 00:21:34.720 "traddr": "192.168.100.8", 00:21:34.720 "adrfam": "ipv4", 00:21:34.720 "trsvcid": "4420", 00:21:34.720 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.720 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.720 "hdgst": false, 00:21:34.720 "ddgst": false 00:21:34.720 }, 00:21:34.720 "method": "bdev_nvme_attach_controller" 00:21:34.720 }' 00:21:34.720 [2024-07-15 18:13:34.902404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.720 [2024-07-15 18:13:34.971865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.658 Running I/O for 10 seconds... 
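[Editor's note] The JSON printed just above is produced by the gen_nvmf_target_json call traced from nvmf/common.sh: one heredoc fragment per subsystem is appended to a `config` array, the fragments are comma-joined, and the result is fed to bdevperf through `--json /dev/fd/63`. A simplified sketch of that pattern, assuming a fixed traddr and an illustrative function name (the real helper additionally wraps the list in a full SPDK "subsystems" document and validates it with `jq .`):

gen_attach_fragments() {
    # One bdev_nvme_attach_controller fragment per subsystem number passed in.
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # one comma-joined string, as printed above
}

gen_attach_fragments 1 2 3   # the test passes 1..10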
00:21:35.658 18:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.658 18:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:35.658 18:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:35.658 18:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.658 18:13:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.658 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:35.916 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.916 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=4 00:21:35.916 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 4 -ge 100 ']' 00:21:35.916 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.175 18:13:36 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=155 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1710998 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1710998 ']' 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1710998 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1710998 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1710998' 00:21:36.175 killing process with pid 1710998 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1710998 00:21:36.175 18:13:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1710998 00:21:36.745 18:13:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:36.745 18:13:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:37.316 [2024-07-15 18:13:37.621185] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.623744] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.625914] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.628044] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.630003] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.631816] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.633608] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:21:37.316 [2024-07-15 18:13:37.635943] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 
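[Editor's note] The waitforio trace above (target/shutdown.sh@57-69) polls bdevperf's RPC socket up to ten times, 0.25 s apart, until Nvme1n1 has completed at least 100 reads: the first poll saw num_read_ops=4, the second saw 155, so it returned 0 and the test proceeded to kill the nvmf target. A reconstruction of that loop based only on the traced commands, with argument handling simplified and SPDK's rpc.py assumed to be on PATH (the test itself goes through its rpc_cmd wrapper):

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do
        # Ask the bdevperf app (not the target) how many reads the bdev has done so far.
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1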
00:21:37.316 [2024-07-15 18:13:37.635990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.316 [2024-07-15 18:13:37.636239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183d00 00:21:37.316 [2024-07-15 18:13:37.636252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 
18:13:37.636267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636780] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183d00 00:21:37.317 [2024-07-15 18:13:37.636877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.636904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.636932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.636959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.636974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.636987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.317 [2024-07-15 18:13:37.637343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183600 00:21:37.317 [2024-07-15 18:13:37.637355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 
len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183600 00:21:37.318 [2024-07-15 18:13:37.637742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.637770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.637785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183a00 
00:21:37.318 [2024-07-15 18:13:37.637797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640071] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 00:21:37.318 [2024-07-15 18:13:37.640098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183c00 00:21:37.318 [2024-07-15 18:13:37.640455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183e00 00:21:37.318 [2024-07-15 18:13:37.640706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.318 [2024-07-15 18:13:37.640720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.640981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.640996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183e00 00:21:37.319 [2024-07-15 18:13:37.641323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 
len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184300 
00:21:37.319 [2024-07-15 18:13:37.641600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184300 00:21:37.319 [2024-07-15 18:13:37.641792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.319 [2024-07-15 18:13:37.641806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184300 00:21:37.320 [2024-07-15 18:13:37.641819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.641849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184300 00:21:37.320 [2024-07-15 
18:13:37.641862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.641877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183c00 00:21:37.320 [2024-07-15 18:13:37.641889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8be2f000 sqhd:52b0 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.644604] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:21:37.320 [2024-07-15 18:13:37.644678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.644694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.644708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.644721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.644735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.644748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.644761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.644775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.647072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.320 [2024-07-15 18:13:37.647092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.320 [2024-07-15 18:13:37.647106] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.320 [2024-07-15 18:13:37.647126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.647139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.647152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.647166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.647179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.647192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.647205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.647217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.649191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.320 [2024-07-15 18:13:37.649207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:37.320 [2024-07-15 18:13:37.649220] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.320 [2024-07-15 18:13:37.649238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.649252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.649265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.649278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.649303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.649316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.649330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.649342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.651517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.320 [2024-07-15 18:13:37.651534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:37.320 [2024-07-15 18:13:37.651546] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.320 [2024-07-15 18:13:37.651565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.651577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.651591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.651603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.651616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.651629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.651642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.651655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.653260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.320 [2024-07-15 18:13:37.653277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:37.320 [2024-07-15 18:13:37.653289] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.320 [2024-07-15 18:13:37.653311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.653324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.653338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.653350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.653363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.653375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.653388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.653401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.655421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.320 [2024-07-15 18:13:37.655438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:37.320 [2024-07-15 18:13:37.655451] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.320 [2024-07-15 18:13:37.655468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.655482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.655495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.655507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.655521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.655534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.655547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.320 [2024-07-15 18:13:37.655559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.320 [2024-07-15 18:13:37.657487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.320 [2024-07-15 18:13:37.657506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:37.320 [2024-07-15 18:13:37.657518] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.321 [2024-07-15 18:13:37.657537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.657551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.657565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.657577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.657591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.657605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.657618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.657630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.659540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.321 [2024-07-15 18:13:37.659558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:37.321 [2024-07-15 18:13:37.659570] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.321 [2024-07-15 18:13:37.659588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.659604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.659618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.659630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.659643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.659655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.659668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.659681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.661572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.321 [2024-07-15 18:13:37.661589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:37.321 [2024-07-15 18:13:37.661601] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.321 [2024-07-15 18:13:37.661619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.661632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.661645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.661657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.661670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.661683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.661696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.661708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.663534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.321 [2024-07-15 18:13:37.663551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:37.321 [2024-07-15 18:13:37.663563] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.321 [2024-07-15 18:13:37.663581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.663594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.663608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.663620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.663634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.663649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.663663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.321 [2024-07-15 18:13:37.663675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:26046 cdw0:0 sqhd:f400 p:0 m:0 dnr:0 00:21:37.321 [2024-07-15 18:13:37.681753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:37.321 [2024-07-15 18:13:37.681772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:37.321 [2024-07-15 18:13:37.681782] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.321 [2024-07-15 18:13:37.687504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.321 [2024-07-15 18:13:37.687530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:37.321 [2024-07-15 18:13:37.687541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:37.321 [2024-07-15 18:13:37.687566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:37.321 [2024-07-15 18:13:37.687577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:37.321 [2024-07-15 18:13:37.687632] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.321 [2024-07-15 18:13:37.687646] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.321 [2024-07-15 18:13:37.687658] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.321 [2024-07-15 18:13:37.687671] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.321 [2024-07-15 18:13:37.687682] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
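The repeated pattern above appears to be the bdevperf host side reacting to the target going away during the shutdown test: each queued ASYNC EVENT REQUEST on the admin queue is completed as ABORTED - SQ DELETION, the CQ then reports transport error -6, the controller is marked as failed, and bdev_nvme declines to start another failover because one is already in flight, after which each cnode is disconnected and reset. When triaging a capture like this, a small shell sketch can tally how many times each subsystem hit the failed state; the file name console.log below is only a stand-in for wherever this output was saved, not something the harness produces:

  # Count "in failed state" notices per NVMe-oF subsystem (console.log is hypothetical).
  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' console.log \
    | sort | uniq -c | sort -rn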
00:21:37.321 [2024-07-15 18:13:37.690310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:37.321 [2024-07-15 18:13:37.690327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:37.321 [2024-07-15 18:13:37.690337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:37.321 task offset: 32768 on job bdev=Nvme6n1 fails
00:21:37.321
00:21:37.321 Latency(us)
00:21:37.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:37.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme1n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme1n1 : 1.84 139.42 8.71 34.72 0.00 364234.27 5793.38 1053609.16
00:21:37.321 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme2n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme2n1 : 1.84 139.91 8.74 34.71 0.00 359949.60 6107.96 1053609.16
00:21:37.321 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme3n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme3n1 : 1.84 138.78 8.67 34.70 0.00 359339.79 28311.55 1053609.16
00:21:37.321 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme4n1 ended in about 1.85 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme4n1 : 1.85 153.91 9.62 34.68 0.00 327632.28 5295.31 1053609.16
00:21:37.321 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme5n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme5n1 : 1.84 139.25 8.70 34.81 0.00 353351.11 36490.44 1053609.16
00:21:37.321 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme6n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme6n1 : 1.84 139.19 8.70 34.80 0.00 350271.90 46347.06 1053609.16
00:21:37.321 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme7n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme7n1 : 1.84 147.82 9.24 34.78 0.00 330690.91 10276.04 1053609.16
00:21:37.321 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme8n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme8n1 : 1.84 142.87 8.93 34.77 0.00 336820.09 14470.35 1053609.16
00:21:37.321 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme9n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme9n1 : 1.84 139.00 8.69 34.75 0.00 341810.54 44879.05 1107296.26
00:21:37.321 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.321 Job: Nvme10n1 ended in about 1.84 seconds with error
00:21:37.321 Verification LBA range: start 0x0 length 0x400
00:21:37.321 Nvme10n1 : 1.84 104.21 6.51 34.74 0.00 423558.76 52009.37 1093874.48
=================================================================================================================== 00:21:37.321 Total : 1384.36 86.52 347.45 0.00 352993.68 5295.31 1107296.26 00:21:37.321 [2024-07-15 18:13:37.711664] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:37.321 [2024-07-15 18:13:37.711686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:37.321 [2024-07-15 18:13:37.711698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:37.580 [2024-07-15 18:13:37.723764] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.580 [2024-07-15 18:13:37.723826] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.580 [2024-07-15 18:13:37.723856] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:21:37.580 [2024-07-15 18:13:37.726878] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.580 [2024-07-15 18:13:37.726927] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.580 [2024-07-15 18:13:37.726954] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:21:37.580 [2024-07-15 18:13:37.727099] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.580 [2024-07-15 18:13:37.727130] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.580 [2024-07-15 18:13:37.727140] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:21:37.580 [2024-07-15 18:13:37.727233] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.580 [2024-07-15 18:13:37.727246] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.580 [2024-07-15 18:13:37.727256] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:21:37.580 [2024-07-15 18:13:37.727345] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.581 [2024-07-15 18:13:37.727364] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.581 [2024-07-15 18:13:37.727374] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:21:37.581 [2024-07-15 18:13:37.727494] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.581 [2024-07-15 18:13:37.727508] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.581 [2024-07-15 18:13:37.727517] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:21:37.581 [2024-07-15 18:13:37.727609] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected 
RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.581 [2024-07-15 18:13:37.727623] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.581 [2024-07-15 18:13:37.727633] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:21:37.581 [2024-07-15 18:13:37.727720] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.581 [2024-07-15 18:13:37.727734] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.581 [2024-07-15 18:13:37.727743] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:21:37.581 [2024-07-15 18:13:37.728271] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.581 [2024-07-15 18:13:37.728288] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.581 [2024-07-15 18:13:37.728298] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:21:37.581 [2024-07-15 18:13:37.728388] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:37.581 [2024-07-15 18:13:37.728401] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:37.581 [2024-07-15 18:13:37.728411] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1711316 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:37.840 rmmod nvme_rdma 00:21:37.840 rmmod nvme_fabrics 00:21:37.840 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 1711316 Killed $rootdir/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:21:37.840 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:37.841 00:21:37.841 real 0m5.198s 00:21:37.841 user 0m17.570s 00:21:37.841 sys 0m1.344s 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.841 ************************************ 00:21:37.841 END TEST nvmf_shutdown_tc3 00:21:37.841 ************************************ 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:37.841 00:21:37.841 real 0m26.207s 00:21:37.841 user 1m11.567s 00:21:37.841 sys 0m10.249s 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:37.841 18:13:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.841 ************************************ 00:21:37.841 END TEST nvmf_shutdown 00:21:37.841 ************************************ 00:21:37.841 18:13:38 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:37.841 18:13:38 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:37.841 18:13:38 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.841 18:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:38.100 18:13:38 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:38.100 18:13:38 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:38.100 18:13:38 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:38.100 ************************************ 00:21:38.100 START TEST nvmf_multicontroller 00:21:38.100 ************************************ 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:21:38.100 * Looking for test storage... 
00:21:38.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:21:38.100 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:21:38.100 00:21:38.100 real 0m0.135s 00:21:38.100 user 0m0.057s 00:21:38.100 sys 0m0.088s 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:38.100 18:13:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:38.100 ************************************ 00:21:38.100 END TEST nvmf_multicontroller 00:21:38.100 ************************************ 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:38.100 18:13:38 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.100 18:13:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:38.366 ************************************ 00:21:38.366 START TEST nvmf_aer 00:21:38.366 ************************************ 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:21:38.366 * Looking for test storage... 00:21:38.366 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:21:38.366 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.367 18:13:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
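The pci_devs/e810/x722/mlx arrays being filled in above are how nvmf/common.sh picks the NICs the test can use: pci_bus_cache is keyed by vendor:device ID, and since SPDK_TEST_NVMF_NICS=mlx5 the mlx entries end up as pci_devs. In the trace that follows, each selected PCI function is then mapped to its kernel net device by expanding the sysfs glob /sys/bus/pci/devices/$pci/net/*. A stand-alone sketch of that lookup, using the two mlx5 functions this rig goes on to report (illustrative only, not the literal code in common.sh):

  # Map each mlx5 PCI function found below to the netdev(s) bound to it.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdir" ] && echo "Found net device under $pci: $(basename "$netdir")"
    done
  done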
00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:46.484 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:46.484 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:46.484 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.484 
18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:46.484 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:46.484 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
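allocate_nic_ips then walks the RDMA interface list and resolves each interface's IPv4 address with an ip/awk/cut pipeline, which is where the 192.168.100.8 and 192.168.100.9 values in the next trace lines come from. Pulled out of the xtrace, the helper amounts to roughly the sketch below; the exact body of get_ip_address in nvmf/common.sh may differ, this is just the pipeline the trace shows:

  get_ip_address() {
    local interface=$1
    # `ip -o -4` prints one record per address; field 4 is the CIDR, e.g. 192.168.100.8/24.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # 192.168.100.9 on this rig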
00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:46.485 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:46.485 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:46.485 altname enp217s0f0np0 00:21:46.485 altname ens818f0np0 00:21:46.485 inet 192.168.100.8/24 scope global mlx_0_0 00:21:46.485 valid_lft forever preferred_lft forever 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:46.485 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:46.485 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:46.485 altname enp217s0f1np1 00:21:46.485 altname ens818f1np1 00:21:46.485 inet 192.168.100.9/24 scope global mlx_0_1 00:21:46.485 valid_lft forever preferred_lft forever 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:46.485 192.168.100.9' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:46.485 192.168.100.9' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:46.485 192.168.100.9' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1715903 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1715903 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1715903 ']' 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.485 18:13:46 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:46.485 [2024-07-15 18:13:46.788934] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:46.485 [2024-07-15 18:13:46.788986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.485 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.485 [2024-07-15 18:13:46.870507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.745 [2024-07-15 18:13:46.941128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.745 [2024-07-15 18:13:46.941172] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.745 [2024-07-15 18:13:46.941182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.745 [2024-07-15 18:13:46.941190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.745 [2024-07-15 18:13:46.941196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:46.745 [2024-07-15 18:13:46.941290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.745 [2024-07-15 18:13:46.941382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.745 [2024-07-15 18:13:46.941470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.745 [2024-07-15 18:13:46.941471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.313 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.313 [2024-07-15 18:13:47.673986] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfa5f80/0xfaa470) succeed. 00:21:47.313 [2024-07-15 18:13:47.683503] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfa75c0/0xfebb00) succeed. 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.572 Malloc0 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.572 [2024-07-15 18:13:47.850801] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.572 [ 00:21:47.572 { 00:21:47.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.572 "subtype": "Discovery", 00:21:47.572 "listen_addresses": [], 00:21:47.572 "allow_any_host": true, 00:21:47.572 "hosts": [] 00:21:47.572 }, 00:21:47.572 { 00:21:47.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.572 "subtype": "NVMe", 00:21:47.572 "listen_addresses": [ 00:21:47.572 { 00:21:47.572 "trtype": "RDMA", 00:21:47.572 "adrfam": "IPv4", 00:21:47.572 "traddr": "192.168.100.8", 00:21:47.572 "trsvcid": "4420" 00:21:47.572 } 00:21:47.572 ], 00:21:47.572 "allow_any_host": true, 00:21:47.572 "hosts": [], 00:21:47.572 "serial_number": "SPDK00000000000001", 00:21:47.572 "model_number": "SPDK bdev Controller", 00:21:47.572 "max_namespaces": 2, 00:21:47.572 "min_cntlid": 1, 00:21:47.572 "max_cntlid": 65519, 00:21:47.572 "namespaces": [ 00:21:47.572 { 00:21:47.572 "nsid": 1, 00:21:47.572 "bdev_name": "Malloc0", 00:21:47.572 "name": "Malloc0", 00:21:47.572 "nguid": "49293E0C9C6F41138728F4DAB354939A", 00:21:47.572 "uuid": "49293e0c-9c6f-4113-8728-f4dab354939a" 00:21:47.572 } 00:21:47.572 ] 00:21:47.572 } 00:21:47.572 ] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=1716189 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:47.572 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:47.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.831 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.831 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:47.831 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:47.831 18:13:47 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 Malloc1 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 [ 00:21:47.831 { 00:21:47.831 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.831 "subtype": "Discovery", 00:21:47.831 "listen_addresses": [], 00:21:47.831 "allow_any_host": true, 00:21:47.831 "hosts": [] 00:21:47.831 }, 00:21:47.831 { 00:21:47.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.831 "subtype": "NVMe", 00:21:47.831 "listen_addresses": [ 00:21:47.831 { 00:21:47.831 "trtype": "RDMA", 00:21:47.831 "adrfam": "IPv4", 00:21:47.831 "traddr": "192.168.100.8", 00:21:47.831 "trsvcid": "4420" 00:21:47.831 } 00:21:47.831 ], 00:21:47.831 "allow_any_host": true, 00:21:47.831 "hosts": [], 00:21:47.831 "serial_number": "SPDK00000000000001", 00:21:47.831 "model_number": "SPDK bdev Controller", 00:21:47.831 "max_namespaces": 2, 00:21:47.831 "min_cntlid": 1, 00:21:47.831 "max_cntlid": 65519, 00:21:47.831 "namespaces": [ 00:21:47.831 { 00:21:47.831 "nsid": 1, 00:21:47.831 "bdev_name": "Malloc0", 00:21:47.831 "name": "Malloc0", 00:21:47.831 "nguid": "49293E0C9C6F41138728F4DAB354939A", 00:21:47.831 "uuid": "49293e0c-9c6f-4113-8728-f4dab354939a" 00:21:47.831 }, 00:21:47.831 { 00:21:47.831 "nsid": 2, 00:21:47.831 "bdev_name": "Malloc1", 00:21:47.831 "name": "Malloc1", 00:21:47.831 "nguid": "ED1C610B4E20464CA097E161BB00359D", 00:21:47.831 "uuid": "ed1c610b-4e20-464c-a097-e161bb00359d" 00:21:47.831 } 00:21:47.831 ] 00:21:47.831 } 00:21:47.831 ] 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 1716189 00:21:47.831 Asynchronous Event Request test 00:21:47.831 Attaching to 192.168.100.8 00:21:47.831 Attached to 192.168.100.8 00:21:47.831 Registering asynchronous event callbacks... 00:21:47.831 Starting namespace attribute notice tests for all controllers... 00:21:47.831 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:47.831 aer_cb - Changed Namespace 00:21:47.831 Cleaning up... 
00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:48.090 rmmod nvme_rdma 00:21:48.090 rmmod nvme_fabrics 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1715903 ']' 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1715903 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1715903 ']' 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1715903 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1715903 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1715903' 00:21:48.090 killing process with pid 1715903 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1715903 00:21:48.090 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1715903 00:21:48.349 18:13:48 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:48.349 18:13:48 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:48.349 00:21:48.349 real 0m10.113s 00:21:48.349 user 0m8.883s 00:21:48.349 sys 0m6.715s 00:21:48.349 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.349 18:13:48 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.349 ************************************ 00:21:48.349 END TEST nvmf_aer 00:21:48.349 ************************************ 00:21:48.349 18:13:48 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:48.349 18:13:48 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:48.349 18:13:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:48.349 18:13:48 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.349 18:13:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:48.349 ************************************ 00:21:48.349 START TEST nvmf_async_init 00:21:48.349 ************************************ 00:21:48.349 18:13:48 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:21:48.608 * Looking for test storage... 00:21:48.608 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.608 18:13:48 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7ab3d377549842e5b11eb2fcfcc18a6d 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.608 18:13:48 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.728 18:13:56 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:56.728 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:56.728 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:56.729 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:56.729 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:56.729 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:56.729 18:13:56 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:56.729 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:56.729 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:56.729 altname enp217s0f0np0 00:21:56.729 altname ens818f0np0 00:21:56.729 inet 192.168.100.8/24 scope global mlx_0_0 00:21:56.729 valid_lft forever preferred_lft forever 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.729 18:13:56 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:56.729 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:56.729 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:56.729 altname enp217s0f1np1 00:21:56.729 altname ens818f1np1 00:21:56.729 inet 192.168.100.9/24 scope global mlx_0_1 00:21:56.729 valid_lft forever preferred_lft forever 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:56.729 192.168.100.9' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:56.729 192.168.100.9' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:56.729 192.168.100.9' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1720242 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1720242 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1720242 ']' 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.729 18:13:56 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.729 [2024-07-15 18:13:56.952458] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:21:56.729 [2024-07-15 18:13:56.952514] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.729 [2024-07-15 18:13:57.034853] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.729 [2024-07-15 18:13:57.107667] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.729 [2024-07-15 18:13:57.107708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.729 [2024-07-15 18:13:57.107718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.729 [2024-07-15 18:13:57.107727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.729 [2024-07-15 18:13:57.107733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.729 [2024-07-15 18:13:57.107754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 [2024-07-15 18:13:57.827874] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1587b20/0x158c010) succeed. 00:21:57.665 [2024-07-15 18:13:57.837136] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1589020/0x15cd6a0) succeed. 
00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 null0 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7ab3d377549842e5b11eb2fcfcc18a6d 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 [2024-07-15 18:13:57.919295] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:57 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 nvme0n1 00:21:57.665 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.665 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.665 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.665 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 [ 00:21:57.665 { 00:21:57.665 "name": "nvme0n1", 00:21:57.665 "aliases": [ 00:21:57.665 "7ab3d377-5498-42e5-b11e-b2fcfcc18a6d" 00:21:57.665 ], 00:21:57.665 "product_name": "NVMe disk", 00:21:57.665 "block_size": 512, 00:21:57.665 "num_blocks": 2097152, 00:21:57.665 "uuid": 
"7ab3d377-5498-42e5-b11e-b2fcfcc18a6d", 00:21:57.665 "assigned_rate_limits": { 00:21:57.665 "rw_ios_per_sec": 0, 00:21:57.665 "rw_mbytes_per_sec": 0, 00:21:57.665 "r_mbytes_per_sec": 0, 00:21:57.665 "w_mbytes_per_sec": 0 00:21:57.665 }, 00:21:57.665 "claimed": false, 00:21:57.665 "zoned": false, 00:21:57.665 "supported_io_types": { 00:21:57.665 "read": true, 00:21:57.665 "write": true, 00:21:57.665 "unmap": false, 00:21:57.665 "flush": true, 00:21:57.665 "reset": true, 00:21:57.665 "nvme_admin": true, 00:21:57.665 "nvme_io": true, 00:21:57.665 "nvme_io_md": false, 00:21:57.665 "write_zeroes": true, 00:21:57.665 "zcopy": false, 00:21:57.665 "get_zone_info": false, 00:21:57.665 "zone_management": false, 00:21:57.665 "zone_append": false, 00:21:57.665 "compare": true, 00:21:57.666 "compare_and_write": true, 00:21:57.666 "abort": true, 00:21:57.666 "seek_hole": false, 00:21:57.666 "seek_data": false, 00:21:57.666 "copy": true, 00:21:57.666 "nvme_iov_md": false 00:21:57.666 }, 00:21:57.666 "memory_domains": [ 00:21:57.666 { 00:21:57.666 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:57.666 "dma_device_type": 0 00:21:57.666 } 00:21:57.666 ], 00:21:57.666 "driver_specific": { 00:21:57.666 "nvme": [ 00:21:57.666 { 00:21:57.666 "trid": { 00:21:57.666 "trtype": "RDMA", 00:21:57.666 "adrfam": "IPv4", 00:21:57.666 "traddr": "192.168.100.8", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.666 }, 00:21:57.666 "ctrlr_data": { 00:21:57.666 "cntlid": 1, 00:21:57.666 "vendor_id": "0x8086", 00:21:57.666 "model_number": "SPDK bdev Controller", 00:21:57.666 "serial_number": "00000000000000000000", 00:21:57.666 "firmware_revision": "24.09", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.666 "oacs": { 00:21:57.666 "security": 0, 00:21:57.666 "format": 0, 00:21:57.666 "firmware": 0, 00:21:57.666 "ns_manage": 0 00:21:57.666 }, 00:21:57.666 "multi_ctrlr": true, 00:21:57.666 "ana_reporting": false 00:21:57.666 }, 00:21:57.666 "vs": { 00:21:57.666 "nvme_version": "1.3" 00:21:57.666 }, 00:21:57.666 "ns_data": { 00:21:57.666 "id": 1, 00:21:57.666 "can_share": true 00:21:57.666 } 00:21:57.666 } 00:21:57.666 ], 00:21:57.666 "mp_policy": "active_passive" 00:21:57.666 } 00:21:57.666 } 00:21:57.666 ] 00:21:57.666 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.666 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:57.666 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 [2024-07-15 18:13:58.044404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.924 [2024-07-15 18:13:58.066950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:57.924 [2024-07-15 18:13:58.088199] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 [ 00:21:57.924 { 00:21:57.924 "name": "nvme0n1", 00:21:57.924 "aliases": [ 00:21:57.924 "7ab3d377-5498-42e5-b11e-b2fcfcc18a6d" 00:21:57.924 ], 00:21:57.924 "product_name": "NVMe disk", 00:21:57.924 "block_size": 512, 00:21:57.924 "num_blocks": 2097152, 00:21:57.924 "uuid": "7ab3d377-5498-42e5-b11e-b2fcfcc18a6d", 00:21:57.924 "assigned_rate_limits": { 00:21:57.924 "rw_ios_per_sec": 0, 00:21:57.924 "rw_mbytes_per_sec": 0, 00:21:57.924 "r_mbytes_per_sec": 0, 00:21:57.924 "w_mbytes_per_sec": 0 00:21:57.924 }, 00:21:57.924 "claimed": false, 00:21:57.924 "zoned": false, 00:21:57.924 "supported_io_types": { 00:21:57.924 "read": true, 00:21:57.924 "write": true, 00:21:57.924 "unmap": false, 00:21:57.924 "flush": true, 00:21:57.924 "reset": true, 00:21:57.924 "nvme_admin": true, 00:21:57.924 "nvme_io": true, 00:21:57.924 "nvme_io_md": false, 00:21:57.924 "write_zeroes": true, 00:21:57.924 "zcopy": false, 00:21:57.924 "get_zone_info": false, 00:21:57.924 "zone_management": false, 00:21:57.924 "zone_append": false, 00:21:57.924 "compare": true, 00:21:57.924 "compare_and_write": true, 00:21:57.924 "abort": true, 00:21:57.924 "seek_hole": false, 00:21:57.924 "seek_data": false, 00:21:57.924 "copy": true, 00:21:57.924 "nvme_iov_md": false 00:21:57.924 }, 00:21:57.924 "memory_domains": [ 00:21:57.924 { 00:21:57.924 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:57.924 "dma_device_type": 0 00:21:57.924 } 00:21:57.924 ], 00:21:57.924 "driver_specific": { 00:21:57.924 "nvme": [ 00:21:57.924 { 00:21:57.924 "trid": { 00:21:57.924 "trtype": "RDMA", 00:21:57.924 "adrfam": "IPv4", 00:21:57.924 "traddr": "192.168.100.8", 00:21:57.924 "trsvcid": "4420", 00:21:57.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.924 }, 00:21:57.924 "ctrlr_data": { 00:21:57.924 "cntlid": 2, 00:21:57.924 "vendor_id": "0x8086", 00:21:57.924 "model_number": "SPDK bdev Controller", 00:21:57.924 "serial_number": "00000000000000000000", 00:21:57.924 "firmware_revision": "24.09", 00:21:57.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.924 "oacs": { 00:21:57.924 "security": 0, 00:21:57.924 "format": 0, 00:21:57.924 "firmware": 0, 00:21:57.924 "ns_manage": 0 00:21:57.924 }, 00:21:57.924 "multi_ctrlr": true, 00:21:57.924 "ana_reporting": false 00:21:57.924 }, 00:21:57.924 "vs": { 00:21:57.924 "nvme_version": "1.3" 00:21:57.924 }, 00:21:57.924 "ns_data": { 00:21:57.924 "id": 1, 00:21:57.924 "can_share": true 00:21:57.924 } 00:21:57.924 } 00:21:57.924 ], 00:21:57.924 "mp_policy": "active_passive" 00:21:57.924 } 00:21:57.924 } 00:21:57.924 ] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 
00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.oJpZrvxzsx 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.oJpZrvxzsx 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 [2024-07-15 18:13:58.170960] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oJpZrvxzsx 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.oJpZrvxzsx 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 [2024-07-15 18:13:58.186999] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.924 nvme0n1 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.924 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.924 [ 00:21:57.924 { 00:21:57.924 "name": "nvme0n1", 00:21:57.924 "aliases": [ 00:21:57.924 "7ab3d377-5498-42e5-b11e-b2fcfcc18a6d" 00:21:57.924 ], 00:21:57.924 "product_name": "NVMe disk", 00:21:57.924 "block_size": 512, 00:21:57.924 "num_blocks": 2097152, 00:21:57.924 "uuid": "7ab3d377-5498-42e5-b11e-b2fcfcc18a6d", 00:21:57.924 "assigned_rate_limits": { 00:21:57.924 "rw_ios_per_sec": 0, 00:21:57.924 "rw_mbytes_per_sec": 0, 00:21:57.924 "r_mbytes_per_sec": 0, 00:21:57.924 "w_mbytes_per_sec": 0 00:21:57.924 }, 00:21:57.924 "claimed": false, 00:21:57.924 "zoned": false, 00:21:57.924 "supported_io_types": { 
00:21:57.924 "read": true, 00:21:57.924 "write": true, 00:21:57.924 "unmap": false, 00:21:57.924 "flush": true, 00:21:57.924 "reset": true, 00:21:57.924 "nvme_admin": true, 00:21:57.924 "nvme_io": true, 00:21:57.924 "nvme_io_md": false, 00:21:57.924 "write_zeroes": true, 00:21:57.924 "zcopy": false, 00:21:57.924 "get_zone_info": false, 00:21:57.924 "zone_management": false, 00:21:57.924 "zone_append": false, 00:21:57.924 "compare": true, 00:21:57.924 "compare_and_write": true, 00:21:57.924 "abort": true, 00:21:57.924 "seek_hole": false, 00:21:57.924 "seek_data": false, 00:21:57.924 "copy": true, 00:21:57.924 "nvme_iov_md": false 00:21:57.924 }, 00:21:57.924 "memory_domains": [ 00:21:57.924 { 00:21:57.924 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:21:57.924 "dma_device_type": 0 00:21:57.924 } 00:21:57.924 ], 00:21:57.924 "driver_specific": { 00:21:57.924 "nvme": [ 00:21:57.924 { 00:21:57.924 "trid": { 00:21:57.924 "trtype": "RDMA", 00:21:57.924 "adrfam": "IPv4", 00:21:57.924 "traddr": "192.168.100.8", 00:21:57.924 "trsvcid": "4421", 00:21:57.924 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.924 }, 00:21:57.924 "ctrlr_data": { 00:21:57.924 "cntlid": 3, 00:21:57.924 "vendor_id": "0x8086", 00:21:57.924 "model_number": "SPDK bdev Controller", 00:21:57.924 "serial_number": "00000000000000000000", 00:21:57.924 "firmware_revision": "24.09", 00:21:57.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.924 "oacs": { 00:21:57.924 "security": 0, 00:21:57.924 "format": 0, 00:21:57.924 "firmware": 0, 00:21:57.924 "ns_manage": 0 00:21:57.924 }, 00:21:57.924 "multi_ctrlr": true, 00:21:57.924 "ana_reporting": false 00:21:57.924 }, 00:21:57.924 "vs": { 00:21:57.924 "nvme_version": "1.3" 00:21:57.925 }, 00:21:57.925 "ns_data": { 00:21:57.925 "id": 1, 00:21:57.925 "can_share": true 00:21:57.925 } 00:21:57.925 } 00:21:57.925 ], 00:21:57.925 "mp_policy": "active_passive" 00:21:57.925 } 00:21:57.925 } 00:21:57.925 ] 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.oJpZrvxzsx 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.925 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:58.183 rmmod nvme_rdma 00:21:58.183 rmmod nvme_fabrics 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.183 
18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1720242 ']' 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1720242 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1720242 ']' 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1720242 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1720242 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1720242' 00:21:58.183 killing process with pid 1720242 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1720242 00:21:58.183 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1720242 00:21:58.441 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.441 18:13:58 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:58.441 00:21:58.441 real 0m9.957s 00:21:58.441 user 0m4.174s 00:21:58.441 sys 0m6.565s 00:21:58.441 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.441 18:13:58 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.441 ************************************ 00:21:58.441 END TEST nvmf_async_init 00:21:58.441 ************************************ 00:21:58.441 18:13:58 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:58.441 18:13:58 nvmf_rdma -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:58.441 18:13:58 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.441 18:13:58 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.441 18:13:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:58.441 ************************************ 00:21:58.441 START TEST dma 00:21:58.441 ************************************ 00:21:58.441 18:13:58 nvmf_rdma.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:21:58.441 * Looking for test storage... 
00:21:58.441 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:58.441 18:13:58 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.441 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:58.441 18:13:58 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.441 18:13:58 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.442 18:13:58 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.442 18:13:58 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.442 18:13:58 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.442 18:13:58 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.442 18:13:58 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:21:58.442 18:13:58 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.442 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.700 18:13:58 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:21:58.700 18:13:58 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:21:58.700 18:13:58 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:21:58.700 18:13:58 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:21:58.700 18:13:58 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.700 18:13:58 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.700 18:13:58 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.700 18:13:58 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.700 18:13:58 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.845 18:14:07 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:06.845 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:06.845 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:06.845 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:06.845 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:06.845 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:06.845 18:14:07 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.105 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:07.106 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.106 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:07.106 altname enp217s0f0np0 00:22:07.106 altname ens818f0np0 00:22:07.106 inet 192.168.100.8/24 scope global mlx_0_0 00:22:07.106 valid_lft forever preferred_lft forever 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.106 18:14:07 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:07.106 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.106 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:07.106 altname enp217s0f1np1 00:22:07.106 altname ens818f1np1 00:22:07.106 inet 192.168.100.9/24 scope global mlx_0_1 00:22:07.106 valid_lft forever preferred_lft forever 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.106 
18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:07.106 192.168.100.9' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:07.106 192.168.100.9' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:07.106 192.168.100.9' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:07.106 18:14:07 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=1724548 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:07.106 18:14:07 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 1724548 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@829 -- # '[' -z 1724548 ']' 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.106 18:14:07 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:07.106 [2024-07-15 18:14:07.488402] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:22:07.106 [2024-07-15 18:14:07.488465] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.366 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.366 [2024-07-15 18:14:07.574493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:07.366 [2024-07-15 18:14:07.643788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
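Note: the 192.168.100.8 / 192.168.100.9 addresses above are derived by get_ip_address, which just parses `ip -o -4 addr show <ifname>`; run standalone, the same extraction is, as a sketch assuming this rig's mlx_0_0 netdev name:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # prints 192.168.100.8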
00:22:07.366 [2024-07-15 18:14:07.643833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.366 [2024-07-15 18:14:07.643842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.366 [2024-07-15 18:14:07.643850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.366 [2024-07-15 18:14:07.643873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.366 [2024-07-15 18:14:07.643929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.366 [2024-07-15 18:14:07.643931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.933 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.933 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@862 -- # return 0 00:22:07.933 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.933 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.933 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:07.933 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.933 18:14:08 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:07.933 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.933 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.193 [2024-07-15 18:14:08.356628] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf45640/0xf49b30) succeed. 00:22:08.193 [2024-07-15 18:14:08.365684] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf46af0/0xf8b1c0) succeed. 
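Note: rpc_cmd in the harness is a thin wrapper around SPDK's RPC client (scripts/rpc.py) talking to /var/tmp/spdk.sock. The transport creation traced above, plus the Malloc0/cnode0 provisioning that dma.sh performs next, could be issued by hand along these lines (a sketch mirroring the rpc_cmd arguments in this trace, not the harness's verbatim commands):
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420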
00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.193 18:14:08 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.193 Malloc0 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.193 18:14:08 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.193 18:14:08 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.193 18:14:08 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:08.193 [2024-07-15 18:14:08.516936] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:08.193 18:14:08 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.193 18:14:08 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:22:08.193 18:14:08 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.193 { 00:22:08.193 "params": { 00:22:08.193 "name": "Nvme$subsystem", 00:22:08.193 "trtype": "$TEST_TRANSPORT", 00:22:08.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.193 "adrfam": "ipv4", 00:22:08.193 "trsvcid": "$NVMF_PORT", 00:22:08.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.193 "hdgst": ${hdgst:-false}, 00:22:08.193 "ddgst": ${ddgst:-false} 00:22:08.193 }, 00:22:08.193 "method": "bdev_nvme_attach_controller" 00:22:08.193 } 00:22:08.193 EOF 00:22:08.193 )") 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:22:08.193 18:14:08 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:08.193 "params": { 00:22:08.193 "name": "Nvme0", 00:22:08.193 "trtype": "rdma", 00:22:08.193 "traddr": "192.168.100.8", 00:22:08.193 "adrfam": "ipv4", 00:22:08.193 "trsvcid": "4420", 00:22:08.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:08.193 "hdgst": false, 00:22:08.193 "ddgst": false 00:22:08.193 }, 00:22:08.193 "method": "bdev_nvme_attach_controller" 00:22:08.193 }' 00:22:08.193 [2024-07-15 18:14:08.568810] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:22:08.193 [2024-07-15 18:14:08.568864] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724835 ] 00:22:08.452 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.452 [2024-07-15 18:14:08.650507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:08.452 [2024-07-15 18:14:08.726567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.452 [2024-07-15 18:14:08.726570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.728 bdev Nvme0n1 reports 1 memory domains 00:22:13.728 bdev Nvme0n1 supports RDMA memory domain 00:22:13.728 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:13.728 ========================================================================== 00:22:13.728 Latency [us] 00:22:13.728 IOPS MiB/s Average min max 00:22:13.728 Core 2: 22287.59 87.06 717.17 245.46 8818.99 00:22:13.728 Core 3: 22437.35 87.65 712.36 235.95 8531.25 00:22:13.728 ========================================================================== 00:22:13.728 Total : 44724.94 174.71 714.76 235.95 8818.99 00:22:13.728 00:22:13.728 Total operations: 223677, translate 223677 pull_push 0 memzero 0 00:22:13.987 18:14:14 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:22:13.987 18:14:14 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:22:13.987 18:14:14 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:22:13.987 [2024-07-15 18:14:14.175665] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
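Note: in the randrw translate run above, the Total row is the sum of the two per-core rows; a quick check, purely illustrative:
    awk 'BEGIN { printf "%.2f IOPS, %.2f MiB/s\n", 22287.59 + 22437.35, 87.06 + 87.65 }'
    # prints 44724.94 IOPS, 174.71 MiB/s, matching the Total line
The 714.76 us figure is likewise consistent with an IOPS-weighted mean of the two per-core average latencies.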
00:22:13.987 [2024-07-15 18:14:14.175729] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1725712 ] 00:22:13.987 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.987 [2024-07-15 18:14:14.257302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:13.987 [2024-07-15 18:14:14.324602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.987 [2024-07-15 18:14:14.324604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.256 bdev Malloc0 reports 2 memory domains 00:22:19.256 bdev Malloc0 doesn't support RDMA memory domain 00:22:19.256 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:19.256 ========================================================================== 00:22:19.256 Latency [us] 00:22:19.256 IOPS MiB/s Average min max 00:22:19.256 Core 2: 14706.64 57.45 1087.23 383.70 1381.89 00:22:19.256 Core 3: 14869.18 58.08 1075.32 419.82 1969.61 00:22:19.256 ========================================================================== 00:22:19.256 Total : 29575.81 115.53 1081.25 383.70 1969.61 00:22:19.256 00:22:19.256 Total operations: 147931, translate 0 pull_push 591724 memzero 0 00:22:19.256 18:14:19 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:22:19.256 18:14:19 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:22:19.256 18:14:19 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:22:19.256 18:14:19 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:22:19.515 Ignoring -M option 00:22:19.515 [2024-07-15 18:14:19.674130] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
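Note: the pull_push run above points test_dma at the plain Malloc0 bdev; a malloc bdev exposes no RDMA memory domain, so its I/O is accounted under pull_push rather than translate (translate 0, pull_push 591724). The --json config for that run comes from gen_malloc_json, which emits a minimal SPDK bdev config, roughly of this shape (an illustrative sketch based on MALLOC_BDEV_SIZE=256 MiB and MALLOC_BLOCK_SIZE=512, i.e. 524288 blocks; not the helper's verbatim output):
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 524288, "block_size": 512 } }
          ]
        }
      ]
    }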
00:22:19.515 [2024-07-15 18:14:19.674187] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726689 ] 00:22:19.515 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.515 [2024-07-15 18:14:19.753415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:19.515 [2024-07-15 18:14:19.824093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.515 [2024-07-15 18:14:19.824096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.084 bdev e215d7b8-a7b9-45bf-a8d1-49320f68af12 reports 1 memory domains 00:22:26.084 bdev e215d7b8-a7b9-45bf-a8d1-49320f68af12 supports RDMA memory domain 00:22:26.084 Initialization complete, running randread IO for 5 sec on 2 cores 00:22:26.084 ========================================================================== 00:22:26.084 Latency [us] 00:22:26.084 IOPS MiB/s Average min max 00:22:26.084 Core 2: 77781.07 303.83 205.00 86.06 3821.75 00:22:26.084 Core 3: 80987.48 316.36 196.79 79.54 2261.72 00:22:26.084 ========================================================================== 00:22:26.084 Total : 158768.55 620.19 200.81 79.54 3821.75 00:22:26.084 00:22:26.084 Total operations: 793938, translate 0 pull_push 0 memzero 793938 00:22:26.084 18:14:25 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:22:26.084 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.084 [2024-07-15 18:14:25.375411] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:27.461 Initializing NVMe Controllers 00:22:27.461 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:22:27.461 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:22:27.461 Initialization complete. Launching workers. 00:22:27.461 ======================================================== 00:22:27.461 Latency(us) 00:22:27.461 Device Information : IOPS MiB/s Average min max 00:22:27.461 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2032.00 7.94 7932.98 4981.31 9975.52 00:22:27.461 ======================================================== 00:22:27.461 Total : 2032.00 7.94 7932.98 4981.31 9975.52 00:22:27.461 00:22:27.461 18:14:27 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:22:27.461 18:14:27 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:22:27.461 18:14:27 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:22:27.461 18:14:27 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:22:27.461 [2024-07-15 18:14:27.710910] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
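Note: the spdk_nvme_perf run above reaches the target through an SPDK transport ID string ('trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'). The same cnode0 subsystem could also be reached from the kernel initiator with nvme-cli, in the spirit of the NVME_CONNECT='nvme connect -i 15' helper set earlier in this trace; a sketch only, nothing in this autotest issues it:
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode0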
00:22:27.461 [2024-07-15 18:14:27.710964] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1728022 ] 00:22:27.461 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.461 [2024-07-15 18:14:27.789298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:27.720 [2024-07-15 18:14:27.860787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.720 [2024-07-15 18:14:27.860791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.999 bdev 92c49891-3c92-4c20-8fc5-5d1541d45b97 reports 1 memory domains 00:22:32.999 bdev 92c49891-3c92-4c20-8fc5-5d1541d45b97 supports RDMA memory domain 00:22:32.999 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:32.999 ========================================================================== 00:22:32.999 Latency [us] 00:22:32.999 IOPS MiB/s Average min max 00:22:32.999 Core 2: 19637.32 76.71 814.07 50.16 8269.97 00:22:32.999 Core 3: 19874.08 77.63 804.37 10.30 8364.65 00:22:32.999 ========================================================================== 00:22:32.999 Total : 39511.39 154.34 809.19 10.30 8364.65 00:22:32.999 00:22:32.999 Total operations: 197590, translate 197483 pull_push 0 memzero 107 00:22:32.999 18:14:33 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:22:32.999 18:14:33 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:32.999 rmmod nvme_rdma 00:22:32.999 rmmod nvme_fabrics 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 1724548 ']' 00:22:32.999 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 1724548 00:22:32.999 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # '[' -z 1724548 ']' 00:22:32.999 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # kill -0 1724548 00:22:32.999 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # uname 00:22:32.999 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.999 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1724548 00:22:33.259 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:33.259 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:33.259 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1724548' 00:22:33.259 killing process with pid 1724548 00:22:33.259 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # kill 1724548 00:22:33.259 18:14:33 nvmf_rdma.dma -- 
common/autotest_common.sh@972 -- # wait 1724548 00:22:33.587 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.587 18:14:33 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:33.587 00:22:33.587 real 0m35.023s 00:22:33.587 user 1m37.372s 00:22:33.587 sys 0m7.834s 00:22:33.587 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:33.587 18:14:33 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:33.587 ************************************ 00:22:33.587 END TEST dma 00:22:33.587 ************************************ 00:22:33.587 18:14:33 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:33.587 18:14:33 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:22:33.587 18:14:33 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:33.587 18:14:33 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.587 18:14:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:33.587 ************************************ 00:22:33.587 START TEST nvmf_identify 00:22:33.587 ************************************ 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:22:33.587 * Looking for test storage... 00:22:33.587 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.587 18:14:33 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:41.712 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:41.712 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:41.712 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:41.712 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:41.712 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:41.713 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.713 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:41.713 altname enp217s0f0np0 00:22:41.713 altname ens818f0np0 00:22:41.713 inet 192.168.100.8/24 scope global mlx_0_0 00:22:41.713 valid_lft forever preferred_lft forever 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:41.713 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:41.713 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:41.713 altname enp217s0f1np1 00:22:41.713 altname ens818f1np1 00:22:41.713 inet 192.168.100.9/24 scope global mlx_0_1 00:22:41.713 valid_lft forever 
preferred_lft forever 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 
00:22:41.713 192.168.100.9' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:41.713 192.168.100.9' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:41.713 192.168.100.9' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1732963 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1732963 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1732963 ']' 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:41.713 18:14:41 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:41.713 [2024-07-15 18:14:41.791564] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:22:41.713 [2024-07-15 18:14:41.791619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.713 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.713 [2024-07-15 18:14:41.875893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.713 [2024-07-15 18:14:41.951128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
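To recap the address setup traced above: each mlx netdev's IPv4 address is read with an ip/awk/cut pipeline, the resulting list is split into first and second target IPs with head and tail, and the initiator-side nvme-rdma module is loaded before the target starts. A condensed sketch using the values from this run:

# IPv4 address of an RDMA netdev, without the prefix length (per get_ip_address)
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9
# First/second target IPs picked out of the newline-separated list
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
# Host-side RDMA initiator module, then the target app as launched by host/identify.sh@18
modprobe nvme-rdma
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &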
00:22:41.713 [2024-07-15 18:14:41.951170] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.713 [2024-07-15 18:14:41.951179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.713 [2024-07-15 18:14:41.951188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.713 [2024-07-15 18:14:41.951194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:41.713 [2024-07-15 18:14:41.951297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.713 [2024-07-15 18:14:41.951413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.713 [2024-07-15 18:14:41.951490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.713 [2024-07-15 18:14:41.951492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.282 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.282 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:42.282 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:42.282 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.282 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.282 [2024-07-15 18:14:42.619556] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2049f80/0x204e470) succeed. 00:22:42.282 [2024-07-15 18:14:42.629055] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x204b5c0/0x208fb00) succeed. 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 Malloc0 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 
nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 [2024-07-15 18:14:42.840466] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:42.541 [ 00:22:42.541 { 00:22:42.541 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:42.541 "subtype": "Discovery", 00:22:42.541 "listen_addresses": [ 00:22:42.541 { 00:22:42.541 "trtype": "RDMA", 00:22:42.541 "adrfam": "IPv4", 00:22:42.541 "traddr": "192.168.100.8", 00:22:42.541 "trsvcid": "4420" 00:22:42.541 } 00:22:42.541 ], 00:22:42.541 "allow_any_host": true, 00:22:42.541 "hosts": [] 00:22:42.541 }, 00:22:42.541 { 00:22:42.541 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.541 "subtype": "NVMe", 00:22:42.541 "listen_addresses": [ 00:22:42.541 { 00:22:42.541 "trtype": "RDMA", 00:22:42.541 "adrfam": "IPv4", 00:22:42.541 "traddr": "192.168.100.8", 00:22:42.541 "trsvcid": "4420" 00:22:42.541 } 00:22:42.541 ], 00:22:42.541 "allow_any_host": true, 00:22:42.541 "hosts": [], 00:22:42.541 "serial_number": "SPDK00000000000001", 00:22:42.541 "model_number": "SPDK bdev Controller", 00:22:42.541 "max_namespaces": 32, 00:22:42.541 "min_cntlid": 1, 00:22:42.541 "max_cntlid": 65519, 00:22:42.541 "namespaces": [ 00:22:42.541 { 00:22:42.541 "nsid": 1, 00:22:42.541 "bdev_name": "Malloc0", 00:22:42.541 "name": "Malloc0", 00:22:42.541 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:42.541 "eui64": "ABCDEF0123456789", 00:22:42.541 "uuid": "c71f9e46-2e65-4518-a07b-f67de5453fe3" 00:22:42.541 } 00:22:42.541 ] 00:22:42.541 } 00:22:42.541 ] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.541 18:14:42 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:42.541 [2024-07-15 18:14:42.901160] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
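The target configuration issued above through rpc_cmd (the harness's wrapper around SPDK's scripts/rpc.py) and the identify call that follows can be reproduced as the sequence below; the rpc.py spelling is an equivalent sketch of the same RPCs, not the literal commands from the log. The two discovery records reported further down should also be what running nvme discover against the same address returns.

# Transport, backing bdev, subsystem, namespace, listeners -- same RPCs as traced above
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# Identify the discovery controller over RDMA, as host/identify.sh@39 does
build/bin/spdk_nvme_identify -L all \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'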
00:22:42.541 [2024-07-15 18:14:42.901201] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733039 ] 00:22:42.541 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.808 [2024-07-15 18:14:42.951132] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:42.808 [2024-07-15 18:14:42.951207] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:22:42.808 [2024-07-15 18:14:42.951223] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:22:42.808 [2024-07-15 18:14:42.951228] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:22:42.808 [2024-07-15 18:14:42.951260] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:42.808 [2024-07-15 18:14:42.960463] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:22:42.808 [2024-07-15 18:14:42.970538] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:42.808 [2024-07-15 18:14:42.970550] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:22:42.808 [2024-07-15 18:14:42.970557] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970565] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970571] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970577] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970584] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970590] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970597] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970603] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970609] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970615] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970622] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970628] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970634] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970641] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970647] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970653] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970660] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970666] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970672] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970678] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970685] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970691] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970697] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970704] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970710] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970716] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970722] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970729] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970737] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970744] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970750] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970756] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:22:42.808 [2024-07-15 18:14:42.970762] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:42.808 [2024-07-15 18:14:42.970766] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:22:42.808 [2024-07-15 18:14:42.970786] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.970799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180800 00:22:42.808 [2024-07-15 18:14:42.976019] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.808 [2024-07-15 18:14:42.976030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:42.808 [2024-07-15 18:14:42.976038] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976046] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.808 [2024-07-15 18:14:42.976054] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:42.808 [2024-07-15 18:14:42.976061] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:42.808 [2024-07-15 18:14:42.976076] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.808 [2024-07-15 18:14:42.976103] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.808 [2024-07-15 18:14:42.976109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:22:42.808 [2024-07-15 18:14:42.976116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:42.808 [2024-07-15 18:14:42.976122] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976129] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:42.808 [2024-07-15 18:14:42.976137] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.808 [2024-07-15 18:14:42.976162] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.808 [2024-07-15 18:14:42.976168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:42.808 [2024-07-15 18:14:42.976175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:42.808 [2024-07-15 18:14:42.976181] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.808 [2024-07-15 18:14:42.976196] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.808 [2024-07-15 18:14:42.976227] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.808 [2024-07-15 18:14:42.976232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:42.808 [2024-07-15 18:14:42.976239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.808 [2024-07-15 18:14:42.976246] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976254] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.808 [2024-07-15 18:14:42.976281] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.808 [2024-07-15 18:14:42.976287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:42.808 [2024-07-15 18:14:42.976293] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:42.808 [2024-07-15 18:14:42.976300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:42.808 [2024-07-15 18:14:42.976306] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:42.808 [2024-07-15 18:14:42.976419] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:42.808 [2024-07-15 18:14:42.976425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:42.808 [2024-07-15 18:14:42.976436] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.808 [2024-07-15 18:14:42.976444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.808 [2024-07-15 18:14:42.976463] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.809 [2024-07-15 18:14:42.976482] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976490] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.809 [2024-07-15 18:14:42.976515] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976527] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:22:42.809 [2024-07-15 18:14:42.976534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:42.809 [2024-07-15 18:14:42.976542] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976549] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:42.809 [2024-07-15 18:14:42.976558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.809 [2024-07-15 18:14:42.976567] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:22:42.809 [2024-07-15 18:14:42.976620] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976634] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:42.809 [2024-07-15 18:14:42.976640] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:42.809 [2024-07-15 18:14:42.976646] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:42.809 [2024-07-15 18:14:42.976653] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:42.809 [2024-07-15 18:14:42.976659] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:42.809 [2024-07-15 18:14:42.976665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:42.809 [2024-07-15 18:14:42.976671] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:42.809 [2024-07-15 18:14:42.976687] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976695] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.809 [2024-07-15 18:14:42.976714] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976729] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976736] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.809 [2024-07-15 18:14:42.976743] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.809 [2024-07-15 18:14:42.976757] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.809 [2024-07-15 18:14:42.976771] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.809 [2024-07-15 18:14:42.976786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:42.809 [2024-07-15 18:14:42.976792] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:42.809 [2024-07-15 18:14:42.976810] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.809 [2024-07-15 18:14:42.976835] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976847] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:42.809 [2024-07-15 18:14:42.976856] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:42.809 [2024-07-15 18:14:42.976862] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976871] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:22:42.809 [2024-07-15 18:14:42.976901] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976914] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976923] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:42.809 [2024-07-15 18:14:42.976945] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180800 00:22:42.809 [2024-07-15 18:14:42.976961] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.976969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.809 [2024-07-15 18:14:42.976980] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.976986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.976997] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.977004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180800 00:22:42.809 [2024-07-15 18:14:42.977016] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.977023] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.977028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.977036] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.977043] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.977048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.977059] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.977066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180800 00:22:42.809 [2024-07-15 18:14:42.977072] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180800 00:22:42.809 [2024-07-15 18:14:42.977092] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.809 [2024-07-15 18:14:42.977097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:42.809 [2024-07-15 18:14:42.977108] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180800 00:22:42.809 ===================================================== 00:22:42.809 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:42.809 
===================================================== 00:22:42.809 Controller Capabilities/Features 00:22:42.809 ================================ 00:22:42.809 Vendor ID: 0000 00:22:42.809 Subsystem Vendor ID: 0000 00:22:42.809 Serial Number: .................... 00:22:42.809 Model Number: ........................................ 00:22:42.809 Firmware Version: 24.09 00:22:42.809 Recommended Arb Burst: 0 00:22:42.809 IEEE OUI Identifier: 00 00 00 00:22:42.809 Multi-path I/O 00:22:42.809 May have multiple subsystem ports: No 00:22:42.809 May have multiple controllers: No 00:22:42.809 Associated with SR-IOV VF: No 00:22:42.809 Max Data Transfer Size: 131072 00:22:42.809 Max Number of Namespaces: 0 00:22:42.809 Max Number of I/O Queues: 1024 00:22:42.809 NVMe Specification Version (VS): 1.3 00:22:42.809 NVMe Specification Version (Identify): 1.3 00:22:42.809 Maximum Queue Entries: 128 00:22:42.809 Contiguous Queues Required: Yes 00:22:42.809 Arbitration Mechanisms Supported 00:22:42.809 Weighted Round Robin: Not Supported 00:22:42.809 Vendor Specific: Not Supported 00:22:42.809 Reset Timeout: 15000 ms 00:22:42.809 Doorbell Stride: 4 bytes 00:22:42.809 NVM Subsystem Reset: Not Supported 00:22:42.809 Command Sets Supported 00:22:42.809 NVM Command Set: Supported 00:22:42.809 Boot Partition: Not Supported 00:22:42.809 Memory Page Size Minimum: 4096 bytes 00:22:42.809 Memory Page Size Maximum: 4096 bytes 00:22:42.809 Persistent Memory Region: Not Supported 00:22:42.809 Optional Asynchronous Events Supported 00:22:42.809 Namespace Attribute Notices: Not Supported 00:22:42.809 Firmware Activation Notices: Not Supported 00:22:42.809 ANA Change Notices: Not Supported 00:22:42.809 PLE Aggregate Log Change Notices: Not Supported 00:22:42.810 LBA Status Info Alert Notices: Not Supported 00:22:42.810 EGE Aggregate Log Change Notices: Not Supported 00:22:42.810 Normal NVM Subsystem Shutdown event: Not Supported 00:22:42.810 Zone Descriptor Change Notices: Not Supported 00:22:42.810 Discovery Log Change Notices: Supported 00:22:42.810 Controller Attributes 00:22:42.810 128-bit Host Identifier: Not Supported 00:22:42.810 Non-Operational Permissive Mode: Not Supported 00:22:42.810 NVM Sets: Not Supported 00:22:42.810 Read Recovery Levels: Not Supported 00:22:42.810 Endurance Groups: Not Supported 00:22:42.810 Predictable Latency Mode: Not Supported 00:22:42.810 Traffic Based Keep ALive: Not Supported 00:22:42.810 Namespace Granularity: Not Supported 00:22:42.810 SQ Associations: Not Supported 00:22:42.810 UUID List: Not Supported 00:22:42.810 Multi-Domain Subsystem: Not Supported 00:22:42.810 Fixed Capacity Management: Not Supported 00:22:42.810 Variable Capacity Management: Not Supported 00:22:42.810 Delete Endurance Group: Not Supported 00:22:42.810 Delete NVM Set: Not Supported 00:22:42.810 Extended LBA Formats Supported: Not Supported 00:22:42.810 Flexible Data Placement Supported: Not Supported 00:22:42.810 00:22:42.810 Controller Memory Buffer Support 00:22:42.810 ================================ 00:22:42.810 Supported: No 00:22:42.810 00:22:42.810 Persistent Memory Region Support 00:22:42.810 ================================ 00:22:42.810 Supported: No 00:22:42.810 00:22:42.810 Admin Command Set Attributes 00:22:42.810 ============================ 00:22:42.810 Security Send/Receive: Not Supported 00:22:42.810 Format NVM: Not Supported 00:22:42.810 Firmware Activate/Download: Not Supported 00:22:42.810 Namespace Management: Not Supported 00:22:42.810 Device Self-Test: Not Supported 00:22:42.810 
Directives: Not Supported 00:22:42.810 NVMe-MI: Not Supported 00:22:42.810 Virtualization Management: Not Supported 00:22:42.810 Doorbell Buffer Config: Not Supported 00:22:42.810 Get LBA Status Capability: Not Supported 00:22:42.810 Command & Feature Lockdown Capability: Not Supported 00:22:42.810 Abort Command Limit: 1 00:22:42.810 Async Event Request Limit: 4 00:22:42.810 Number of Firmware Slots: N/A 00:22:42.810 Firmware Slot 1 Read-Only: N/A 00:22:42.810 Firmware Activation Without Reset: N/A 00:22:42.810 Multiple Update Detection Support: N/A 00:22:42.810 Firmware Update Granularity: No Information Provided 00:22:42.810 Per-Namespace SMART Log: No 00:22:42.810 Asymmetric Namespace Access Log Page: Not Supported 00:22:42.810 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:42.810 Command Effects Log Page: Not Supported 00:22:42.810 Get Log Page Extended Data: Supported 00:22:42.810 Telemetry Log Pages: Not Supported 00:22:42.810 Persistent Event Log Pages: Not Supported 00:22:42.810 Supported Log Pages Log Page: May Support 00:22:42.810 Commands Supported & Effects Log Page: Not Supported 00:22:42.810 Feature Identifiers & Effects Log Page:May Support 00:22:42.810 NVMe-MI Commands & Effects Log Page: May Support 00:22:42.810 Data Area 4 for Telemetry Log: Not Supported 00:22:42.810 Error Log Page Entries Supported: 128 00:22:42.810 Keep Alive: Not Supported 00:22:42.810 00:22:42.810 NVM Command Set Attributes 00:22:42.810 ========================== 00:22:42.810 Submission Queue Entry Size 00:22:42.810 Max: 1 00:22:42.810 Min: 1 00:22:42.810 Completion Queue Entry Size 00:22:42.810 Max: 1 00:22:42.810 Min: 1 00:22:42.810 Number of Namespaces: 0 00:22:42.810 Compare Command: Not Supported 00:22:42.810 Write Uncorrectable Command: Not Supported 00:22:42.810 Dataset Management Command: Not Supported 00:22:42.810 Write Zeroes Command: Not Supported 00:22:42.810 Set Features Save Field: Not Supported 00:22:42.810 Reservations: Not Supported 00:22:42.810 Timestamp: Not Supported 00:22:42.810 Copy: Not Supported 00:22:42.810 Volatile Write Cache: Not Present 00:22:42.810 Atomic Write Unit (Normal): 1 00:22:42.810 Atomic Write Unit (PFail): 1 00:22:42.810 Atomic Compare & Write Unit: 1 00:22:42.810 Fused Compare & Write: Supported 00:22:42.810 Scatter-Gather List 00:22:42.810 SGL Command Set: Supported 00:22:42.810 SGL Keyed: Supported 00:22:42.810 SGL Bit Bucket Descriptor: Not Supported 00:22:42.810 SGL Metadata Pointer: Not Supported 00:22:42.810 Oversized SGL: Not Supported 00:22:42.810 SGL Metadata Address: Not Supported 00:22:42.810 SGL Offset: Supported 00:22:42.810 Transport SGL Data Block: Not Supported 00:22:42.810 Replay Protected Memory Block: Not Supported 00:22:42.810 00:22:42.810 Firmware Slot Information 00:22:42.810 ========================= 00:22:42.810 Active slot: 0 00:22:42.810 00:22:42.810 00:22:42.810 Error Log 00:22:42.810 ========= 00:22:42.810 00:22:42.810 Active Namespaces 00:22:42.810 ================= 00:22:42.810 Discovery Log Page 00:22:42.810 ================== 00:22:42.810 Generation Counter: 2 00:22:42.810 Number of Records: 2 00:22:42.810 Record Format: 0 00:22:42.810 00:22:42.810 Discovery Log Entry 0 00:22:42.810 ---------------------- 00:22:42.810 Transport Type: 1 (RDMA) 00:22:42.810 Address Family: 1 (IPv4) 00:22:42.810 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:42.810 Entry Flags: 00:22:42.810 Duplicate Returned Information: 1 00:22:42.810 Explicit Persistent Connection Support for Discovery: 1 00:22:42.810 Transport Requirements: 
00:22:42.810 Secure Channel: Not Required 00:22:42.810 Port ID: 0 (0x0000) 00:22:42.810 Controller ID: 65535 (0xffff) 00:22:42.810 Admin Max SQ Size: 128 00:22:42.810 Transport Service Identifier: 4420 00:22:42.810 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:42.810 Transport Address: 192.168.100.8 00:22:42.810 Transport Specific Address Subtype - RDMA 00:22:42.810 RDMA QP Service Type: 1 (Reliable Connected) 00:22:42.810 RDMA Provider Type: 1 (No provider specified) 00:22:42.810 RDMA CM Service: 1 (RDMA_CM) 00:22:42.810 Discovery Log Entry 1 00:22:42.810 ---------------------- 00:22:42.810 Transport Type: 1 (RDMA) 00:22:42.810 Address Family: 1 (IPv4) 00:22:42.810 Subsystem Type: 2 (NVM Subsystem) 00:22:42.810 Entry Flags: 00:22:42.810 Duplicate Returned Information: 0 00:22:42.810 Explicit Persistent Connection Support for Discovery: 0 00:22:42.810 Transport Requirements: 00:22:42.810 Secure Channel: Not Required 00:22:42.810 Port ID: 0 (0x0000) 00:22:42.810 Controller ID: 65535 (0xffff) 00:22:42.810 Admin Max SQ Size: [2024-07-15 18:14:42.977181] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:42.810 [2024-07-15 18:14:42.977191] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 53281 doesn't match qid 00:22:42.810 [2024-07-15 18:14:42.977205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:9ad0 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977212] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 53281 doesn't match qid 00:22:42.810 [2024-07-15 18:14:42.977220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:9ad0 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977226] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 53281 doesn't match qid 00:22:42.810 [2024-07-15 18:14:42.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:9ad0 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977241] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 53281 doesn't match qid 00:22:42.810 [2024-07-15 18:14:42.977249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:9ad0 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977258] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180800 00:22:42.810 [2024-07-15 18:14:42.977266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.810 [2024-07-15 18:14:42.977283] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.810 [2024-07-15 18:14:42.977289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977298] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.810 [2024-07-15 18:14:42.977306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.810 [2024-07-15 18:14:42.977312] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180800 00:22:42.810 [2024-07-15 
18:14:42.977330] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.810 [2024-07-15 18:14:42.977336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977343] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:42.810 [2024-07-15 18:14:42.977349] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:42.810 [2024-07-15 18:14:42.977357] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180800 00:22:42.810 [2024-07-15 18:14:42.977366] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.810 [2024-07-15 18:14:42.977374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.810 [2024-07-15 18:14:42.977393] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.810 [2024-07-15 18:14:42.977399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:42.810 [2024-07-15 18:14:42.977406] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180800 00:22:42.810 [2024-07-15 18:14:42.977415] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.810 [2024-07-15 18:14:42.977423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977445] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977458] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977466] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977494] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977507] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977515] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977539] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977552] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977561] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977584] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977596] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977605] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977631] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977645] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977654] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977681] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977693] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977702] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977732] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977744] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977753] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977778] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977791] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977799] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977825] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977837] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977847] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977878] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977890] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977899] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977926] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977940] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977949] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.977972] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.977978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.977984] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.977993] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978023] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978036] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978076] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978088] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978097] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978121] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978133] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978141] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978171] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978183] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978192] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978215] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978229] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978238] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978261] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978273] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978282] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978306] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:42.811 [2024-07-15 18:14:42.978318] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978327] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.811 [2024-07-15 18:14:42.978334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.811 [2024-07-15 18:14:42.978358] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.811 [2024-07-15 18:14:42.978363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978370] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978379] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.812 [2024-07-15 18:14:42.978408] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.812 [2024-07-15 18:14:42.978414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978420] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978429] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:42.812 [2024-07-15 18:14:42.978452] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.812 [2024-07-15 18:14:42.978458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978464] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978473] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.812 [2024-07-15 18:14:42.978504] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.812 [2024-07-15 18:14:42.978510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978517] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978526] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.812 [2024-07-15 18:14:42.978556] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.812 [2024-07-15 18:14:42.978561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978568] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978576] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.812 [2024-07-15 18:14:42.978606] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.812 [2024-07-15 18:14:42.978612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978618] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978627] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.812 [2024-07-15 18:14:42.978635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.812 [2024-07-15 18:14:42.978652] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.812 [2024-07-15 18:14:42.978658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:22:42.812 [2024-07-15 18:14:42.978664] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978673] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978702] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.978714] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978723] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978749] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.978761] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978770] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978793] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.978805] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978814] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978843] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.978855] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978864] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978893] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.978906] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978914] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978942] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.978954] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978962] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.978970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.978992] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.978997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.979004] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.979017] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.979025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.979044] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.979050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.979056] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.979065] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.979074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.979092] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.813 [2024-07-15 18:14:42.979098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:42.813 [2024-07-15 18:14:42.979104] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.979113] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.813 [2024-07-15 18:14:42.979121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:42.813 [2024-07-15 18:14:42.979142] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979154] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979163] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979195] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979207] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979216] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979240] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979252] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979260] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979282] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979294] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979303] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979332] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979344] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979353] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979380] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979392] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979401] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979428] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979440] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979449] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979479] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979491] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979499] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979525] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979537] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979546] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979577] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979589] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979598] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979623] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979635] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979645] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979671] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979683] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979692] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979723] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979735] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979744] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979771] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979792] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979816] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979828] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979836] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979864] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979876] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979885] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979913] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979925] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979935] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.979962] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.979968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.979974] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979983] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.979991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.980008] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.982129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.982136] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.982145] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.814 [2024-07-15 18:14:42.982153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.814 [2024-07-15 18:14:42.982169] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.814 [2024-07-15 18:14:42.982175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:22:42.814 [2024-07-15 18:14:42.982182] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:42.982189] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:22:42.815 128 00:22:42.815 Transport Service Identifier: 4420 00:22:42.815 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:42.815 Transport Address: 192.168.100.8 00:22:42.815 Transport Specific Address Subtype - RDMA 00:22:42.815 RDMA QP Service Type: 1 (Reliable Connected) 00:22:42.815 RDMA Provider Type: 1 (No provider specified) 00:22:42.815 RDMA CM Service: 1 (RDMA_CM) 00:22:42.815 18:14:43 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:42.815 [2024-07-15 18:14:43.057128] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:22:42.815 [2024-07-15 18:14:43.057170] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733117 ] 00:22:42.815 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.815 [2024-07-15 18:14:43.102512] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:42.815 [2024-07-15 18:14:43.102582] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:22:42.815 [2024-07-15 18:14:43.102604] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:22:42.815 [2024-07-15 18:14:43.102612] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:22:42.815 [2024-07-15 18:14:43.102639] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:42.815 [2024-07-15 18:14:43.120445] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
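[Editorial note] The DEBUG trace that follows walks SPDK's controller-initialization state machine for nqn.2016-06.io.spdk:cnode1: connect adminq, read VS and CAP, check and toggle CC.EN, wait for CSTS.RDY, identify controller, configure AER, and set the keep-alive timeout. As a rough illustration only, the minimal C sketch below drives the same sequence through SPDK's public API against the transport ID string the test passes via -r. It assumes the SPDK headers and an RDMA-capable build are available; the program name "identify_sketch" and the reduced error handling are illustrative and not part of the test suite.

/* Minimal sketch (assumptions noted above): parse the transport ID used in
 * this test and attach to the subsystem; spdk_nvme_connect() internally runs
 * the admin-queue connect and CC.EN/CSTS.RDY init steps traced in the log. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";  /* illustrative name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same key:value transport string the test passes to spdk_nvme_identify -r */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Connect the admin queue and bring the controller to ready state */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* Identify Controller data, e.g. the CNTLID and MDTS values printed
	 * in the identify_done DEBUG lines below */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID: 0x%04x  MDTS: %u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Building such a sketch would typically use SPDK's usual application compile/link flags; those details are outside the scope of this log.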
00:22:42.815 [2024-07-15 18:14:43.134516] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:42.815 [2024-07-15 18:14:43.134527] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:22:42.815 [2024-07-15 18:14:43.134534] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134542] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134548] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134555] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134561] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134567] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134574] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134580] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134586] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134593] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134599] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134605] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134612] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134618] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134624] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134630] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134637] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134643] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134649] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134656] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134662] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134668] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134675] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 
18:14:43.134681] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134687] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134694] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134700] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134709] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134715] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134722] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134728] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134734] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:22:42.815 [2024-07-15 18:14:43.134739] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:22:42.815 [2024-07-15 18:14:43.134744] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:22:42.815 [2024-07-15 18:14:43.134759] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.134771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180800 00:22:42.815 [2024-07-15 18:14:43.140018] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.815 [2024-07-15 18:14:43.140028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:42.815 [2024-07-15 18:14:43.140035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140043] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:42.815 [2024-07-15 18:14:43.140050] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:42.815 [2024-07-15 18:14:43.140056] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:42.815 [2024-07-15 18:14:43.140070] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.815 [2024-07-15 18:14:43.140096] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.815 [2024-07-15 18:14:43.140102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:22:42.815 [2024-07-15 18:14:43.140110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:42.815 [2024-07-15 18:14:43.140116] nvme_rdma.c:2367:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140123] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:42.815 [2024-07-15 18:14:43.140131] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.815 [2024-07-15 18:14:43.140157] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.815 [2024-07-15 18:14:43.140162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:22:42.815 [2024-07-15 18:14:43.140169] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:42.815 [2024-07-15 18:14:43.140175] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:42.815 [2024-07-15 18:14:43.140190] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.815 [2024-07-15 18:14:43.140216] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.815 [2024-07-15 18:14:43.140221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:42.815 [2024-07-15 18:14:43.140228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:42.815 [2024-07-15 18:14:43.140234] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140243] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.815 [2024-07-15 18:14:43.140267] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.815 [2024-07-15 18:14:43.140272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:42.815 [2024-07-15 18:14:43.140279] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:42.815 [2024-07-15 18:14:43.140285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:42.815 [2024-07-15 18:14:43.140291] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:22:42.815 [2024-07-15 18:14:43.140404] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:42.815 [2024-07-15 18:14:43.140409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:42.815 [2024-07-15 18:14:43.140418] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.815 [2024-07-15 18:14:43.140426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.815 [2024-07-15 18:14:43.140442] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.815 [2024-07-15 18:14:43.140448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:42.815 [2024-07-15 18:14:43.140454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:42.816 [2024-07-15 18:14:43.140460] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140469] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.816 [2024-07-15 18:14:43.140493] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.140498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.140505] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:42.816 [2024-07-15 18:14:43.140510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140517] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140527] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:42.816 [2024-07-15 18:14:43.140536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140545] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:22:42.816 [2024-07-15 18:14:43.140590] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.140596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.140605] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:42.816 [2024-07-15 18:14:43.140611] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:42.816 [2024-07-15 18:14:43.140616] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:42.816 [2024-07-15 18:14:43.140622] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:42.816 [2024-07-15 18:14:43.140628] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:42.816 [2024-07-15 18:14:43.140634] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140640] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140655] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.816 [2024-07-15 18:14:43.140687] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.140693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.140701] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.816 [2024-07-15 18:14:43.140716] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.816 [2024-07-15 18:14:43.140730] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.816 [2024-07-15 18:14:43.140744] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.816 [2024-07-15 18:14:43.140757] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140764] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140782] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.816 [2024-07-15 18:14:43.140814] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.140820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.140826] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:42.816 [2024-07-15 18:14:43.140835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140841] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140864] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.816 [2024-07-15 18:14:43.140892] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.140897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.140946] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140952] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.140969] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.140977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180800 00:22:42.816 [2024-07-15 18:14:43.141003] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.141009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.141029] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:42.816 
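[Editor's note - illustrative sketch, not part of the captured output] The DEBUG records just above show the host-side init state machine fetching the active namespace list and registering "Namespace 1" through spdk_nvme_ctrlr_get_ns(). A minimal C sketch of how an SPDK host application walks those namespaces once the controller reaches the ready state is included here for reference; the API names follow the public SPDK NVMe host headers, but exact signatures can differ between releases, so treat this as an assumption rather than the test's own code.

/* Sketch: enumerate active namespaces on an already-connected controller,
 * mirroring the "Namespace 1 was added" record above. */
#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

static void dump_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                        continue;
                }
                /* Size data comes from the per-namespace Identify fetched in
                 * the "identify ns" states traced in this log. */
                printf("Namespace %u: %" PRIu64 " sectors of %u bytes\n", nsid,
                       spdk_nvme_ns_get_num_sectors(ns),
                       spdk_nvme_ns_get_sector_size(ns));
        }
}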
[2024-07-15 18:14:43.141039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141045] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141062] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:22:42.816 [2024-07-15 18:14:43.141102] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.141108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.141121] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141127] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141144] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180800 00:22:42.816 [2024-07-15 18:14:43.141175] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.141181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.141190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141196] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141227] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141241] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:42.816 [2024-07-15 18:14:43.141247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:42.816 [2024-07-15 18:14:43.141253] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:42.816 [2024-07-15 18:14:43.141267] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.816 [2024-07-15 18:14:43.141283] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.816 [2024-07-15 18:14:43.141301] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.141306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:42.816 [2024-07-15 18:14:43.141315] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180800 00:22:42.816 [2024-07-15 18:14:43.141321] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.816 [2024-07-15 18:14:43.141327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141333] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141342] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.817 [2024-07-15 18:14:43.141368] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141380] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141389] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.817 [2024-07-15 18:14:43.141413] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141425] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf8e8 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141434] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.817 [2024-07-15 18:14:43.141462] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141474] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141488] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180800 00:22:42.817 [2024-07-15 18:14:43.141504] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180800 00:22:42.817 [2024-07-15 18:14:43.141520] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180800 00:22:42.817 [2024-07-15 18:14:43.141538] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180800 00:22:42.817 [2024-07-15 18:14:43.141558] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141575] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141582] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141599] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141605] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:22:42.817 [2024-07-15 18:14:43.141618] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180800 00:22:42.817 [2024-07-15 18:14:43.141624] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.817 [2024-07-15 18:14:43.141630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:42.817 [2024-07-15 18:14:43.141639] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180800 00:22:42.817 ===================================================== 00:22:42.817 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:42.817 ===================================================== 00:22:42.817 Controller Capabilities/Features 00:22:42.817 ================================ 00:22:42.817 Vendor ID: 8086 00:22:42.817 Subsystem Vendor ID: 8086 00:22:42.817 Serial Number: SPDK00000000000001 00:22:42.817 Model Number: SPDK bdev Controller 00:22:42.817 Firmware Version: 24.09 00:22:42.817 Recommended Arb Burst: 6 00:22:42.817 IEEE OUI Identifier: e4 d2 5c 00:22:42.817 Multi-path I/O 00:22:42.817 May have multiple subsystem ports: Yes 00:22:42.817 May have multiple controllers: Yes 00:22:42.817 Associated with SR-IOV VF: No 00:22:42.817 Max Data Transfer Size: 131072 00:22:42.817 Max Number of Namespaces: 32 00:22:42.817 Max Number of I/O Queues: 127 00:22:42.817 NVMe Specification Version (VS): 1.3 00:22:42.817 NVMe Specification Version (Identify): 1.3 00:22:42.817 Maximum Queue Entries: 128 00:22:42.817 Contiguous Queues Required: Yes 00:22:42.817 Arbitration Mechanisms Supported 00:22:42.817 Weighted Round Robin: Not Supported 00:22:42.817 Vendor Specific: Not Supported 00:22:42.817 Reset Timeout: 15000 ms 00:22:42.817 Doorbell Stride: 4 bytes 00:22:42.817 NVM Subsystem Reset: Not Supported 00:22:42.817 Command Sets Supported 00:22:42.817 NVM Command Set: Supported 00:22:42.817 Boot Partition: Not Supported 00:22:42.817 Memory Page Size Minimum: 4096 bytes 00:22:42.817 Memory Page Size Maximum: 4096 bytes 00:22:42.817 Persistent Memory Region: Not Supported 00:22:42.817 Optional Asynchronous Events Supported 00:22:42.817 Namespace Attribute Notices: Supported 00:22:42.817 Firmware Activation Notices: Not Supported 00:22:42.817 ANA Change Notices: Not Supported 00:22:42.817 PLE Aggregate Log Change Notices: Not Supported 00:22:42.817 LBA Status Info Alert Notices: Not Supported 00:22:42.817 EGE Aggregate Log Change Notices: Not Supported 00:22:42.817 Normal NVM Subsystem Shutdown event: Not Supported 00:22:42.817 Zone Descriptor Change Notices: Not Supported 00:22:42.817 Discovery Log Change Notices: Not Supported 00:22:42.817 Controller Attributes 00:22:42.817 128-bit Host Identifier: Supported 00:22:42.817 Non-Operational Permissive Mode: Not Supported 00:22:42.817 NVM Sets: Not Supported 00:22:42.817 Read Recovery Levels: Not Supported 00:22:42.817 Endurance Groups: Not Supported 00:22:42.817 Predictable Latency Mode: Not Supported 00:22:42.817 Traffic Based Keep ALive: Not Supported 00:22:42.817 Namespace Granularity: Not Supported 00:22:42.817 SQ Associations: Not Supported 00:22:42.817 UUID List: Not Supported 00:22:42.817 Multi-Domain Subsystem: Not Supported 00:22:42.817 Fixed Capacity Management: Not Supported 00:22:42.817 Variable Capacity Management: Not Supported 00:22:42.817 Delete Endurance Group: Not Supported 00:22:42.817 Delete NVM Set: Not Supported 00:22:42.817 Extended LBA 
Formats Supported: Not Supported 00:22:42.817 Flexible Data Placement Supported: Not Supported 00:22:42.817 00:22:42.817 Controller Memory Buffer Support 00:22:42.817 ================================ 00:22:42.817 Supported: No 00:22:42.817 00:22:42.817 Persistent Memory Region Support 00:22:42.817 ================================ 00:22:42.817 Supported: No 00:22:42.817 00:22:42.817 Admin Command Set Attributes 00:22:42.817 ============================ 00:22:42.817 Security Send/Receive: Not Supported 00:22:42.817 Format NVM: Not Supported 00:22:42.817 Firmware Activate/Download: Not Supported 00:22:42.817 Namespace Management: Not Supported 00:22:42.817 Device Self-Test: Not Supported 00:22:42.817 Directives: Not Supported 00:22:42.817 NVMe-MI: Not Supported 00:22:42.817 Virtualization Management: Not Supported 00:22:42.817 Doorbell Buffer Config: Not Supported 00:22:42.817 Get LBA Status Capability: Not Supported 00:22:42.817 Command & Feature Lockdown Capability: Not Supported 00:22:42.817 Abort Command Limit: 4 00:22:42.817 Async Event Request Limit: 4 00:22:42.817 Number of Firmware Slots: N/A 00:22:42.817 Firmware Slot 1 Read-Only: N/A 00:22:42.817 Firmware Activation Without Reset: N/A 00:22:42.817 Multiple Update Detection Support: N/A 00:22:42.817 Firmware Update Granularity: No Information Provided 00:22:42.817 Per-Namespace SMART Log: No 00:22:42.817 Asymmetric Namespace Access Log Page: Not Supported 00:22:42.817 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:42.818 Command Effects Log Page: Supported 00:22:42.818 Get Log Page Extended Data: Supported 00:22:42.818 Telemetry Log Pages: Not Supported 00:22:42.818 Persistent Event Log Pages: Not Supported 00:22:42.818 Supported Log Pages Log Page: May Support 00:22:42.818 Commands Supported & Effects Log Page: Not Supported 00:22:42.818 Feature Identifiers & Effects Log Page:May Support 00:22:42.818 NVMe-MI Commands & Effects Log Page: May Support 00:22:42.818 Data Area 4 for Telemetry Log: Not Supported 00:22:42.818 Error Log Page Entries Supported: 128 00:22:42.818 Keep Alive: Supported 00:22:42.818 Keep Alive Granularity: 10000 ms 00:22:42.818 00:22:42.818 NVM Command Set Attributes 00:22:42.818 ========================== 00:22:42.818 Submission Queue Entry Size 00:22:42.818 Max: 64 00:22:42.818 Min: 64 00:22:42.818 Completion Queue Entry Size 00:22:42.818 Max: 16 00:22:42.818 Min: 16 00:22:42.818 Number of Namespaces: 32 00:22:42.818 Compare Command: Supported 00:22:42.818 Write Uncorrectable Command: Not Supported 00:22:42.818 Dataset Management Command: Supported 00:22:42.818 Write Zeroes Command: Supported 00:22:42.818 Set Features Save Field: Not Supported 00:22:42.818 Reservations: Supported 00:22:42.818 Timestamp: Not Supported 00:22:42.818 Copy: Supported 00:22:42.818 Volatile Write Cache: Present 00:22:42.818 Atomic Write Unit (Normal): 1 00:22:42.818 Atomic Write Unit (PFail): 1 00:22:42.818 Atomic Compare & Write Unit: 1 00:22:42.818 Fused Compare & Write: Supported 00:22:42.818 Scatter-Gather List 00:22:42.818 SGL Command Set: Supported 00:22:42.818 SGL Keyed: Supported 00:22:42.818 SGL Bit Bucket Descriptor: Not Supported 00:22:42.818 SGL Metadata Pointer: Not Supported 00:22:42.818 Oversized SGL: Not Supported 00:22:42.818 SGL Metadata Address: Not Supported 00:22:42.818 SGL Offset: Supported 00:22:42.818 Transport SGL Data Block: Not Supported 00:22:42.818 Replay Protected Memory Block: Not Supported 00:22:42.818 00:22:42.818 Firmware Slot Information 00:22:42.818 ========================= 00:22:42.818 Active 
slot: 1 00:22:42.818 Slot 1 Firmware Revision: 24.09 00:22:42.818 00:22:42.818 00:22:42.818 Commands Supported and Effects 00:22:42.818 ============================== 00:22:42.818 Admin Commands 00:22:42.818 -------------- 00:22:42.818 Get Log Page (02h): Supported 00:22:42.818 Identify (06h): Supported 00:22:42.818 Abort (08h): Supported 00:22:42.818 Set Features (09h): Supported 00:22:42.818 Get Features (0Ah): Supported 00:22:42.818 Asynchronous Event Request (0Ch): Supported 00:22:42.818 Keep Alive (18h): Supported 00:22:42.818 I/O Commands 00:22:42.818 ------------ 00:22:42.818 Flush (00h): Supported LBA-Change 00:22:42.818 Write (01h): Supported LBA-Change 00:22:42.818 Read (02h): Supported 00:22:42.818 Compare (05h): Supported 00:22:42.818 Write Zeroes (08h): Supported LBA-Change 00:22:42.818 Dataset Management (09h): Supported LBA-Change 00:22:42.818 Copy (19h): Supported LBA-Change 00:22:42.818 00:22:42.818 Error Log 00:22:42.818 ========= 00:22:42.818 00:22:42.818 Arbitration 00:22:42.818 =========== 00:22:42.818 Arbitration Burst: 1 00:22:42.818 00:22:42.818 Power Management 00:22:42.818 ================ 00:22:42.818 Number of Power States: 1 00:22:42.818 Current Power State: Power State #0 00:22:42.818 Power State #0: 00:22:42.818 Max Power: 0.00 W 00:22:42.818 Non-Operational State: Operational 00:22:42.818 Entry Latency: Not Reported 00:22:42.818 Exit Latency: Not Reported 00:22:42.818 Relative Read Throughput: 0 00:22:42.818 Relative Read Latency: 0 00:22:42.818 Relative Write Throughput: 0 00:22:42.818 Relative Write Latency: 0 00:22:42.818 Idle Power: Not Reported 00:22:42.818 Active Power: Not Reported 00:22:42.818 Non-Operational Permissive Mode: Not Supported 00:22:42.818 00:22:42.818 Health Information 00:22:42.818 ================== 00:22:42.818 Critical Warnings: 00:22:42.818 Available Spare Space: OK 00:22:42.818 Temperature: OK 00:22:42.818 Device Reliability: OK 00:22:42.818 Read Only: No 00:22:42.818 Volatile Memory Backup: OK 00:22:42.818 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:42.818 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:42.818 Available Spare: 0% 00:22:42.818 Available Spare Threshold: 0% 00:22:42.818 Life Percentage [2024-07-15 18:14:43.141717] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141725] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.141745] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.141751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141757] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141784] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:42.818 [2024-07-15 18:14:43.141793] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38079 doesn't match qid 00:22:42.818 [2024-07-15 18:14:43.141807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32672 cdw0:5 sqhd:fad0 p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141814] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38079 doesn't match qid 
00:22:42.818 [2024-07-15 18:14:43.141822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32672 cdw0:5 sqhd:fad0 p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141829] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38079 doesn't match qid 00:22:42.818 [2024-07-15 18:14:43.141837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32672 cdw0:5 sqhd:fad0 p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141843] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 38079 doesn't match qid 00:22:42.818 [2024-07-15 18:14:43.141851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32672 cdw0:5 sqhd:fad0 p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141859] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.141886] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.141892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141901] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.141916] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141930] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.141943] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:42.818 [2024-07-15 18:14:43.141949] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:42.818 [2024-07-15 18:14:43.141955] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141963] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.141971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.141991] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.141998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.142004] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142017] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 
18:14:43.142025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.142046] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.142052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.142058] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142067] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.142097] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.142103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.142109] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142118] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.142144] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.142150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.142157] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142165] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.818 [2024-07-15 18:14:43.142173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.818 [2024-07-15 18:14:43.142193] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.818 [2024-07-15 18:14:43.142199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:42.818 [2024-07-15 18:14:43.142205] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142214] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142240] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142253] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142261] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142285] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142298] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142306] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142336] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142348] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142357] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142379] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142392] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142400] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142426] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142438] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142447] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142474] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142486] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142495] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142521] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142533] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142541] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142567] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142579] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142588] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142617] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142629] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142638] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142666] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142678] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142687] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142712] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142724] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142733] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142760] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142772] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142781] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142808] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142820] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142829] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142851] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142863] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142872] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142897] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 
18:14:43.142909] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142918] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142948] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.142953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.142960] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142969] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.142976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.142998] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.143010] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.143023] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.143033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.143053] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.143058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.143065] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.143074] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.143081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.143101] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.819 [2024-07-15 18:14:43.143107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:22:42.819 [2024-07-15 18:14:43.143113] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.143122] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.819 [2024-07-15 18:14:43.143130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.819 [2024-07-15 18:14:43.143151] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143164] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143173] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143197] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143209] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143217] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143241] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143253] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143262] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143286] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143298] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143308] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143332] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143344] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143353] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143382] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143394] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143403] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143434] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143447] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143455] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143483] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143495] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143504] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143535] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143547] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143556] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143583] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 
18:14:43.143597] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143606] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143629] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143650] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143682] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143694] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143702] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143726] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143738] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143747] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143770] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143782] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143791] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143819] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143831] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143839] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143863] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143876] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143885] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143915] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143927] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143935] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.143959] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.143965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.143971] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143980] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.143988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:22:42.820 [2024-07-15 18:14:43.144005] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:22:42.820 [2024-07-15 18:14:43.148017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:22:42.820 [2024-07-15 18:14:43.148026] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180800 00:22:42.820 [2024-07-15 18:14:43.148035] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180800
00:22:42.820 [2024-07-15 18:14:43.148043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:22:42.820 [2024-07-15 18:14:43.148062] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:22:42.820 [2024-07-15 18:14:43.148067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0
00:22:42.820 [2024-07-15 18:14:43.148074] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180800
00:22:42.821 [2024-07-15 18:14:43.148080] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:22:42.821 Used: 0%
00:22:42.821 Data Units Read: 0
00:22:42.821 Data Units Written: 0
00:22:42.821 Host Read Commands: 0
00:22:42.821 Host Write Commands: 0
00:22:42.821 Controller Busy Time: 0 minutes
00:22:42.821 Power Cycles: 0
00:22:42.821 Power On Hours: 0 hours
00:22:42.821 Unsafe Shutdowns: 0
00:22:42.821 Unrecoverable Media Errors: 0
00:22:42.821 Lifetime Error Log Entries: 0
00:22:42.821 Warning Temperature Time: 0 minutes
00:22:42.821 Critical Temperature Time: 0 minutes
00:22:42.821 
00:22:42.821 Number of Queues
00:22:42.821 ================
00:22:42.821 Number of I/O Submission Queues: 127
00:22:42.821 Number of I/O Completion Queues: 127
00:22:42.821 
00:22:42.821 Active Namespaces
00:22:42.821 =================
00:22:42.821 Namespace ID:1
00:22:42.821 Error Recovery Timeout: Unlimited
00:22:42.821 Command Set Identifier: NVM (00h)
00:22:42.821 Deallocate: Supported
00:22:42.821 Deallocated/Unwritten Error: Not Supported
00:22:42.821 Deallocated Read Value: Unknown
00:22:42.821 Deallocate in Write Zeroes: Not Supported
00:22:42.821 Deallocated Guard Field: 0xFFFF
00:22:42.821 Flush: Supported
00:22:42.821 Reservation: Supported
00:22:42.821 Namespace Sharing Capabilities: Multiple Controllers
00:22:42.821 Size (in LBAs): 131072 (0GiB)
00:22:42.821 Capacity (in LBAs): 131072 (0GiB)
00:22:42.821 Utilization (in LBAs): 131072 (0GiB)
00:22:42.821 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:42.821 EUI64: ABCDEF0123456789
00:22:42.821 UUID: c71f9e46-2e65-4518-a07b-f67de5453fe3
00:22:42.821 Thin Provisioning: Not Supported
00:22:42.821 Per-NS Atomic Units: Yes
00:22:42.821 Atomic Boundary Size (Normal): 0
00:22:42.821 Atomic Boundary Size (PFail): 0
00:22:42.821 Atomic Boundary Offset: 0
00:22:42.821 Maximum Single Source Range Length: 65535
00:22:42.821 Maximum Copy Length: 65535
00:22:42.821 Maximum Source Range Count: 1
00:22:42.821 NGUID/EUI64 Never Reused: No
00:22:42.821 Namespace Write Protected: No
00:22:42.821 Number of LBA Formats: 1
00:22:42.821 Current LBA Format: LBA Format #00
00:22:42.821 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:42.821 
00:22:42.821 18:14:43 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:42.821 18:14:43 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:42.821 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:42.821 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify --
host/identify.sh@56 -- # nvmftestfini 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:43.092 rmmod nvme_rdma 00:22:43.092 rmmod nvme_fabrics 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1732963 ']' 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1732963 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1732963 ']' 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1732963 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1732963 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1732963' 00:22:43.092 killing process with pid 1732963 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1732963 00:22:43.092 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1732963 00:22:43.351 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:43.351 18:14:43 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:43.351 00:22:43.351 real 0m9.780s 00:22:43.351 user 0m8.588s 00:22:43.351 sys 0m6.504s 00:22:43.351 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:43.351 18:14:43 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:43.351 ************************************ 00:22:43.351 END TEST nvmf_identify 00:22:43.351 ************************************ 00:22:43.351 18:14:43 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:43.351 18:14:43 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:43.351 18:14:43 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:43.351 18:14:43 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:43.351 18:14:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:43.351 ************************************ 00:22:43.351 START TEST nvmf_perf 00:22:43.351 ************************************ 00:22:43.351 18:14:43 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:22:43.611 * Looking for test storage... 00:22:43.611 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:43.611 18:14:43 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == 
mlx5 ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:51.733 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:51.733 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:51.733 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:51.733 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:51.733 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:51.734 18:14:51 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:51.734 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:51.734 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:51.734 altname enp217s0f0np0 00:22:51.734 altname ens818f0np0 00:22:51.734 inet 192.168.100.8/24 scope global mlx_0_0 00:22:51.734 valid_lft forever preferred_lft forever 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:51.734 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:51.734 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:51.734 altname enp217s0f1np1 00:22:51.734 altname ens818f1np1 00:22:51.734 inet 192.168.100.9/24 scope global mlx_0_1 00:22:51.734 valid_lft forever preferred_lft forever 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # 
continue 2 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:51.734 192.168.100.9' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:51.734 192.168.100.9' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:51.734 192.168.100.9' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1737181 00:22:51.734 18:14:51 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1737181 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1737181 ']' 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.734 18:14:51 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.734 [2024-07-15 18:14:52.018547] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:22:51.734 [2024-07-15 18:14:52.018598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.734 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.734 [2024-07-15 18:14:52.103362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:51.991 [2024-07-15 18:14:52.175449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.991 [2024-07-15 18:14:52.175490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.992 [2024-07-15 18:14:52.175499] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.992 [2024-07-15 18:14:52.175507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.992 [2024-07-15 18:14:52.175531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
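For anyone reproducing this stage by hand: the target side of the run amounts to launching the SPDK NVMe-oF target application with an explicit core mask and waiting until its RPC socket answers. A minimal shell sketch, assuming a checkout in $SPDK_DIR and the default RPC socket /var/tmp/spdk.sock (the polling loop is illustrative; the harness uses its own waitforlisten helper):

    # start the target: shm id 0, all tracepoint groups enabled, cores 0-3
    "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # block until the app is listening on its UNIX domain RPC socket
    until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 1
    done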
00:22:51.992 [2024-07-15 18:14:52.175590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.992 [2024-07-15 18:14:52.175686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.992 [2024-07-15 18:14:52.175790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.992 [2024-07-15 18:14:52.175792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:52.558 18:14:52 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:55.847 18:14:55 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:55.847 18:14:55 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:55.847 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:22:55.847 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:56.105 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:56.105 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:22:56.105 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:56.105 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:22:56.105 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:22:56.105 [2024-07-15 18:14:56.462626] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:22:56.105 [2024-07-15 18:14:56.484945] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5f93d0/0x727080) succeed. 00:22:56.105 [2024-07-15 18:14:56.495172] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5faa10/0x606f80) succeed. 
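The per-test setup that the trace walks through next boils down to a short RPC sequence against that target: create the RDMA transport, create a subsystem, attach a malloc bdev and the local NVMe bdev as namespaces, and expose an RDMA listener. A condensed sketch, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path used in the trace (Malloc0 and Nvme0n1 are simply the bdev names this run happens to use):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    rpc.py bdev_malloc_create 64 512        # prints the new bdev name, "Malloc0" here
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

An initiator can then be pointed at the same address, which is what the spdk_nvme_perf invocations below do via -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' together with the queue depth (-q), I/O size (-o), workload mix (-w/-M) and runtime (-t) of interest.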
00:22:56.363 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.621 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.621 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.621 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:56.621 18:14:56 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:56.880 18:14:57 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:57.138 [2024-07-15 18:14:57.319592] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:57.138 18:14:57 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:57.138 18:14:57 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:22:57.138 18:14:57 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:22:57.138 18:14:57 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:57.139 18:14:57 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:22:58.515 Initializing NVMe Controllers 00:22:58.515 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:22:58.515 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:22:58.515 Initialization complete. Launching workers. 00:22:58.515 ======================================================== 00:22:58.515 Latency(us) 00:22:58.515 Device Information : IOPS MiB/s Average min max 00:22:58.515 PCIE (0000:d8:00.0) NSID 1 from core 0: 102071.39 398.72 313.05 29.68 4312.75 00:22:58.515 ======================================================== 00:22:58.515 Total : 102071.39 398.72 313.05 29.68 4312.75 00:22:58.515 00:22:58.515 18:14:58 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:58.515 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.858 Initializing NVMe Controllers 00:23:01.858 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:01.858 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:01.858 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:01.858 Initialization complete. Launching workers. 
00:23:01.858 ======================================================== 00:23:01.858 Latency(us) 00:23:01.858 Device Information : IOPS MiB/s Average min max 00:23:01.858 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6833.00 26.69 146.15 48.52 4158.72 00:23:01.858 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5303.00 20.71 187.62 74.10 4262.99 00:23:01.858 ======================================================== 00:23:01.858 Total : 12136.00 47.41 164.27 48.52 4262.99 00:23:01.858 00:23:01.858 18:15:02 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:01.858 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.049 Initializing NVMe Controllers 00:23:06.049 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.049 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.049 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:06.049 Initialization complete. Launching workers. 00:23:06.050 ======================================================== 00:23:06.050 Latency(us) 00:23:06.050 Device Information : IOPS MiB/s Average min max 00:23:06.050 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18632.98 72.79 1717.62 495.56 6051.56 00:23:06.050 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7987.05 7738.07 8931.92 00:23:06.050 ======================================================== 00:23:06.050 Total : 22664.98 88.54 2832.92 495.56 8931.92 00:23:06.050 00:23:06.050 18:15:05 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:23:06.050 18:15:05 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:06.050 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.243 Initializing NVMe Controllers 00:23:10.243 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.243 Controller IO queue size 128, less than required. 00:23:10.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.243 Controller IO queue size 128, less than required. 00:23:10.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.243 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:10.243 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:10.243 Initialization complete. Launching workers. 
00:23:10.243 ======================================================== 00:23:10.243 Latency(us) 00:23:10.243 Device Information : IOPS MiB/s Average min max 00:23:10.243 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4070.40 1017.60 31471.96 11048.33 67945.84 00:23:10.243 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4122.31 1030.58 30878.99 11937.41 53019.65 00:23:10.243 ======================================================== 00:23:10.243 Total : 8192.71 2048.18 31173.60 11048.33 67945.84 00:23:10.243 00:23:10.243 18:15:09 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:23:10.243 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.243 No valid NVMe controllers or AIO or URING devices found 00:23:10.243 Initializing NVMe Controllers 00:23:10.243 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:10.243 Controller IO queue size 128, less than required. 00:23:10.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.243 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:10.243 Controller IO queue size 128, less than required. 00:23:10.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:10.243 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:10.243 WARNING: Some requested NVMe devices were skipped 00:23:10.243 18:15:10 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:23:10.243 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.435 Initializing NVMe Controllers 00:23:14.435 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:14.435 Controller IO queue size 128, less than required. 00:23:14.435 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.435 Controller IO queue size 128, less than required. 00:23:14.435 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:14.435 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:14.435 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:14.435 Initialization complete. Launching workers. 
00:23:14.435 
00:23:14.435 ====================
00:23:14.435 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:14.435 RDMA transport:
00:23:14.435 dev name: mlx5_0
00:23:14.435 polls: 413017
00:23:14.435 idle_polls: 409289
00:23:14.435 completions: 45534
00:23:14.435 queued_requests: 1
00:23:14.435 total_send_wrs: 22767
00:23:14.435 send_doorbell_updates: 3481
00:23:14.435 total_recv_wrs: 22894
00:23:14.435 recv_doorbell_updates: 3484
00:23:14.435 ---------------------------------
00:23:14.435 
00:23:14.435 ====================
00:23:14.435 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:14.435 RDMA transport:
00:23:14.435 dev name: mlx5_0
00:23:14.435 polls: 415592
00:23:14.435 idle_polls: 415306
00:23:14.435 completions: 20442
00:23:14.435 queued_requests: 1
00:23:14.435 total_send_wrs: 10221
00:23:14.435 send_doorbell_updates: 260
00:23:14.435 total_recv_wrs: 10348
00:23:14.435 recv_doorbell_updates: 261
00:23:14.435 ---------------------------------
00:23:14.435 ========================================================
00:23:14.435 Latency(us)
00:23:14.435 Device Information : IOPS MiB/s Average min max
00:23:14.435 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5691.47 1422.87 22556.09 11193.27 55700.80
00:23:14.435 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2554.99 638.75 50105.48 28792.19 79213.14
00:23:14.435 ========================================================
00:23:14.435 Total : 8246.46 2061.62 31091.67 11193.27 79213.14
00:23:14.435 
00:23:14.435 18:15:14 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:14.435 18:15:14 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:23:14.694 rmmod nvme_rdma
00:23:14.694 rmmod nvme_fabrics
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1737181 ']'
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1737181
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1737181 ']'
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1737181
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf --
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.694 18:15:14 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1737181 00:23:14.694 18:15:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.694 18:15:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.694 18:15:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1737181' 00:23:14.694 killing process with pid 1737181 00:23:14.694 18:15:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1737181 00:23:14.694 18:15:15 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1737181 00:23:17.225 18:15:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.225 18:15:17 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:17.225 00:23:17.225 real 0m33.864s 00:23:17.225 user 1m44.075s 00:23:17.225 sys 0m7.516s 00:23:17.225 18:15:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.225 18:15:17 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:17.225 ************************************ 00:23:17.225 END TEST nvmf_perf 00:23:17.225 ************************************ 00:23:17.225 18:15:17 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:17.225 18:15:17 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:17.225 18:15:17 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:17.225 18:15:17 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.225 18:15:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:17.485 ************************************ 00:23:17.485 START TEST nvmf_fio_host 00:23:17.485 ************************************ 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:17.485 * Looking for test storage... 
00:23:17.485 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.485 18:15:17 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.486 18:15:17 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:23:25.603 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:25.603 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:25.603 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:25.604 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:25.604 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:25.604 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:25.604 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:25.604 altname enp217s0f0np0 00:23:25.604 altname ens818f0np0 00:23:25.604 inet 192.168.100.8/24 scope global mlx_0_0 00:23:25.604 valid_lft forever preferred_lft forever 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:25.604 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:25.604 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:25.604 altname enp217s0f1np1 00:23:25.604 altname ens818f1np1 00:23:25.604 inet 192.168.100.9/24 scope global mlx_0_1 00:23:25.604 valid_lft forever preferred_lft forever 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- 
# continue 2 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:25.604 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:25.605 192.168.100.9' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:25.605 192.168.100.9' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:25.605 192.168.100.9' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1745960 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1745960 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1745960 ']' 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.605 18:15:25 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.605 [2024-07-15 18:15:25.746489] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:23:25.605 [2024-07-15 18:15:25.746540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.605 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.605 [2024-07-15 18:15:25.831841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.605 [2024-07-15 18:15:25.906325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.605 [2024-07-15 18:15:25.906368] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.605 [2024-07-15 18:15:25.906377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.605 [2024-07-15 18:15:25.906386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.605 [2024-07-15 18:15:25.906392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.605 [2024-07-15 18:15:25.906452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.605 [2024-07-15 18:15:25.906548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.605 [2024-07-15 18:15:25.906632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.605 [2024-07-15 18:15:25.906634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.173 18:15:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.173 18:15:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:26.173 18:15:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:26.432 [2024-07-15 18:15:26.736577] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7f9f80/0x7fe470) succeed. 
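For reference, the target bring-up that host/fio.sh drives through rpc.py in the surrounding trace reduces to roughly the following sketch. Paths, NQN, serial number, and listener address are taken verbatim from the lines logged here; the sketch assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock as started above.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # create the RDMA transport, then a 64 MB malloc bdev with 512-byte blocks
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  # expose the bdev as a namespace of cnode1 and listen on the first RDMA IP
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
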
00:23:26.432 [2024-07-15 18:15:26.746019] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7fb5c0/0x83fb00) succeed. 00:23:26.691 18:15:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:26.691 18:15:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.691 18:15:26 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.691 18:15:26 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:26.950 Malloc1 00:23:26.950 18:15:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.950 18:15:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:27.221 18:15:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:27.522 [2024-07-15 18:15:27.655250] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:27.522 
18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:27.522 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:27.793 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:27.793 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:27.793 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:27.793 18:15:27 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:23:28.052 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:28.052 fio-3.35 00:23:28.052 Starting 1 thread 00:23:28.052 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.580 00:23:30.580 test: (groupid=0, jobs=1): err= 0: pid=1746646: Mon Jul 15 18:15:30 2024 00:23:30.580 read: IOPS=17.9k, BW=69.9MiB/s (73.3MB/s)(140MiB/2004msec) 00:23:30.580 slat (nsec): min=1357, max=40006, avg=1492.36, stdev=495.47 00:23:30.580 clat (usec): min=1954, max=6382, avg=3547.21, stdev=93.13 00:23:30.580 lat (usec): min=1976, max=6383, avg=3548.70, stdev=93.06 00:23:30.580 clat percentiles (usec): 00:23:30.580 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523], 00:23:30.580 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3556], 60.00th=[ 3556], 00:23:30.580 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3556], 00:23:30.580 | 99.00th=[ 3589], 99.50th=[ 3916], 99.90th=[ 4883], 99.95th=[ 5407], 00:23:30.580 | 99.99th=[ 6325] 00:23:30.580 bw ( KiB/s): min=70216, max=72520, per=100.00%, avg=71650.00, stdev=1007.14, samples=4 00:23:30.580 iops : min=17554, max=18130, avg=17912.50, stdev=251.78, samples=4 00:23:30.580 write: IOPS=17.9k, BW=70.0MiB/s (73.4MB/s)(140MiB/2004msec); 0 zone resets 00:23:30.580 slat (nsec): min=1403, max=18709, avg=1577.01, stdev=476.40 00:23:30.580 clat (usec): min=2663, max=6394, avg=3546.45, stdev=101.73 00:23:30.580 lat (usec): min=2669, max=6396, avg=3548.03, stdev=101.68 00:23:30.580 clat percentiles (usec): 00:23:30.580 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3523], 20.00th=[ 3523], 00:23:30.580 | 30.00th=[ 3523], 40.00th=[ 3523], 50.00th=[ 3556], 60.00th=[ 3556], 00:23:30.580 | 70.00th=[ 3556], 80.00th=[ 3556], 90.00th=[ 3556], 95.00th=[ 3556], 00:23:30.580 | 99.00th=[ 3589], 99.50th=[ 3884], 99.90th=[ 5014], 99.95th=[ 5932], 00:23:30.580 | 99.99th=[ 6390] 00:23:30.580 bw ( KiB/s): min=70136, max=72416, per=100.00%, avg=71664.00, stdev=1036.77, samples=4 00:23:30.580 iops : min=17534, max=18104, avg=17916.00, stdev=259.19, samples=4 00:23:30.580 lat (msec) : 2=0.01%, 4=99.57%, 10=0.42% 00:23:30.580 cpu : usr=99.55%, sys=0.05%, ctx=17, majf=0, minf=4 00:23:30.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:30.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.580 issued rwts: total=35880,35904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.580 00:23:30.580 Run status group 0 (all jobs): 00:23:30.580 READ: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=140MiB (147MB), run=2004-2004msec 00:23:30.580 WRITE: bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=140MiB (147MB), run=2004-2004msec 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:30.580 18:15:30 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 
trsvcid=4420 ns=1' 00:23:30.580 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:30.580 fio-3.35 00:23:30.580 Starting 1 thread 00:23:30.580 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.107 00:23:33.107 test: (groupid=0, jobs=1): err= 0: pid=1747295: Mon Jul 15 18:15:33 2024 00:23:33.107 read: IOPS=14.5k, BW=227MiB/s (238MB/s)(447MiB/1970msec) 00:23:33.107 slat (nsec): min=2242, max=54473, avg=2587.39, stdev=1006.79 00:23:33.107 clat (usec): min=475, max=10403, avg=1769.71, stdev=1498.25 00:23:33.107 lat (usec): min=477, max=10412, avg=1772.29, stdev=1498.63 00:23:33.107 clat percentiles (usec): 00:23:33.107 | 1.00th=[ 676], 5.00th=[ 766], 10.00th=[ 832], 20.00th=[ 914], 00:23:33.107 | 30.00th=[ 979], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1287], 00:23:33.107 | 70.00th=[ 1434], 80.00th=[ 1713], 90.00th=[ 4817], 95.00th=[ 4883], 00:23:33.107 | 99.00th=[ 6718], 99.50th=[ 7308], 99.90th=[ 9503], 99.95th=[ 9765], 00:23:33.107 | 99.99th=[10028] 00:23:33.108 bw ( KiB/s): min=113312, max=116096, per=49.27%, avg=114496.00, stdev=1339.17, samples=4 00:23:33.108 iops : min= 7082, max= 7256, avg=7156.00, stdev=83.70, samples=4 00:23:33.108 write: IOPS=8441, BW=132MiB/s (138MB/s)(233MiB/1765msec); 0 zone resets 00:23:33.108 slat (usec): min=26, max=133, avg=29.06, stdev= 7.71 00:23:33.108 clat (usec): min=3021, max=19647, avg=12284.86, stdev=1941.78 00:23:33.108 lat (usec): min=3064, max=19674, avg=12313.92, stdev=1940.06 00:23:33.108 clat percentiles (usec): 00:23:33.108 | 1.00th=[ 5604], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10945], 00:23:33.108 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:23:33.108 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15270], 00:23:33.108 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19006], 99.95th=[19530], 00:23:33.108 | 99.99th=[19530] 00:23:33.108 bw ( KiB/s): min=115200, max=120704, per=87.66%, avg=118392.00, stdev=2437.66, samples=4 00:23:33.108 iops : min= 7200, max= 7544, avg=7399.50, stdev=152.35, samples=4 00:23:33.108 lat (usec) : 500=0.01%, 750=2.50%, 1000=19.50% 00:23:33.108 lat (msec) : 2=32.18%, 4=2.38%, 10=12.13%, 20=31.31% 00:23:33.108 cpu : usr=96.66%, sys=1.65%, ctx=186, majf=0, minf=3 00:23:33.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:33.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:33.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:33.108 issued rwts: total=28610,14899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:33.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:33.108 00:23:33.108 Run status group 0 (all jobs): 00:23:33.108 READ: bw=227MiB/s (238MB/s), 227MiB/s-227MiB/s (238MB/s-238MB/s), io=447MiB (469MB), run=1970-1970msec 00:23:33.108 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=233MiB (244MB), run=1765-1765msec 00:23:33.108 18:15:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.108 18:15:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:33.108 18:15:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:33.108 18:15:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 
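For reference, both fio passes traced above (example_config.fio at 4 KiB and the SGL pass with mock_sgl_config.fio) follow the same invocation pattern: preload the SPDK fio NVMe plugin and address the target through the filename string instead of a block device. A minimal sketch of the first invocation, with the plugin path, job file, and target address exactly as they appear in this trace:

  # fio job files set ioengine=spdk; the plugin is supplied via LD_PRELOAD
  LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
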
00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:33.366 rmmod nvme_rdma 00:23:33.366 rmmod nvme_fabrics 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1745960 ']' 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1745960 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1745960 ']' 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1745960 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1745960 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1745960' 00:23:33.366 killing process with pid 1745960 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1745960 00:23:33.366 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1745960 00:23:33.625 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.625 18:15:33 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:33.625 00:23:33.625 real 0m16.272s 00:23:33.625 user 0m55.819s 00:23:33.625 sys 0m7.315s 00:23:33.625 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.625 18:15:33 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.625 ************************************ 00:23:33.625 END TEST nvmf_fio_host 00:23:33.625 ************************************ 00:23:33.625 18:15:33 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:33.625 18:15:33 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:23:33.625 18:15:33 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:33.625 18:15:33 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.625 18:15:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:33.625 ************************************ 00:23:33.625 START TEST nvmf_failover 00:23:33.625 ************************************ 00:23:33.625 18:15:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:23:33.884 * Looking for test storage... 00:23:33.884 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.884 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.885 18:15:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:42.008 
18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:42.008 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:42.008 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:42.008 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:42.008 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.008 18:15:41 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.008 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:42.008 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:42.008 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:42.008 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:42.008 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:42.008 altname enp217s0f0np0 00:23:42.008 altname ens818f0np0 00:23:42.008 inet 192.168.100.8/24 scope global mlx_0_0 00:23:42.008 valid_lft forever preferred_lft forever 00:23:42.008 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:42.009 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:42.009 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:42.009 altname enp217s0f1np1 00:23:42.009 altname ens818f1np1 00:23:42.009 inet 192.168.100.9/24 scope global mlx_0_1 00:23:42.009 valid_lft forever preferred_lft forever 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:42.009 18:15:42 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:42.009 192.168.100.9' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:42.009 192.168.100.9' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:42.009 192.168.100.9' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1751680 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1751680 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1751680 ']' 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.009 18:15:42 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.009 [2024-07-15 18:15:42.203847] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:23:42.009 [2024-07-15 18:15:42.203905] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.009 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.009 [2024-07-15 18:15:42.288489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.009 [2024-07-15 18:15:42.361502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.009 [2024-07-15 18:15:42.361542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.009 [2024-07-15 18:15:42.361552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.009 [2024-07-15 18:15:42.361560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.009 [2024-07-15 18:15:42.361566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
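For reference, the interface/IP discovery traced above reduces to roughly the following shell. This is a minimal sketch assembled only from the commands visible in this trace (the helper name get_ip_address and the NVMF_* variables follow nvmf/common.sh; the real script adds error handling and iterates the detected RDMA interfaces instead of hard-coding mlx_0_0/mlx_0_1):

    # Sketch of the IP discovery seen in the nvmf/common.sh trace above.
    get_ip_address() {
        local interface=$1
        # field 4 of "ip -o -4 addr show" is "ADDR/PREFIX"; strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # In this run the two mlx5 ports (0000:d9:00.0 / 0000:d9:00.1) expose mlx_0_0 / mlx_0_1
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                                   # -> 192.168.100.8 / 192.168.100.9

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma                                           # host driver, loaded once RDMA HW is confirmed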
00:23:42.009 [2024-07-15 18:15:42.361667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.009 [2024-07-15 18:15:42.361749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.009 [2024-07-15 18:15:42.361751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.945 18:15:43 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:42.945 [2024-07-15 18:15:43.231425] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x99b500/0x99f9f0) succeed. 00:23:42.945 [2024-07-15 18:15:43.240509] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x99caa0/0x9e1080) succeed. 00:23:43.204 18:15:43 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:43.204 Malloc0 00:23:43.204 18:15:43 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.462 18:15:43 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.721 18:15:43 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:43.721 [2024-07-15 18:15:44.075565] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:43.721 18:15:44 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:43.980 [2024-07-15 18:15:44.247918] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:23:43.980 18:15:44 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:44.239 [2024-07-15 18:15:44.420475] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1752032 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm 
-f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1752032 /var/tmp/bdevperf.sock 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1752032 ']' 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.239 18:15:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:45.174 18:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.174 18:15:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:45.174 18:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.174 NVMe0n1 00:23:45.433 18:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:45.433 00:23:45.433 18:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1752300 00:23:45.433 18:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.433 18:15:45 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:46.854 18:15:46 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:46.854 18:15:47 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:50.141 18:15:50 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:50.141 00:23:50.141 18:15:50 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:23:50.141 18:15:50 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:53.476 18:15:53 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:53.476 [2024-07-15 18:15:53.626471] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:53.476 18:15:53 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:54.412 18:15:54 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:23:54.671 18:15:54 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 1752300 00:24:01.243 0 00:24:01.243 18:16:00 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 1752032 00:24:01.243 18:16:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1752032 ']' 00:24:01.243 18:16:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1752032 00:24:01.243 18:16:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:01.243 18:16:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.243 18:16:00 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1752032 00:24:01.243 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:01.243 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:01.243 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1752032' 00:24:01.243 killing process with pid 1752032 00:24:01.243 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1752032 00:24:01.243 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1752032 00:24:01.243 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:01.243 [2024-07-15 18:15:44.493643] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:24:01.243 [2024-07-15 18:15:44.493700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1752032 ] 00:24:01.243 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.243 [2024-07-15 18:15:44.577603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.243 [2024-07-15 18:15:44.649382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.243 Running I/O for 15 seconds... 
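Condensed for readability, the failover flow exercised above corresponds roughly to the RPC sequence below. Every command is taken from the host/failover.sh trace in this log; the sleeps between steps, pid bookkeeping, the cleanup trap, and the separately launched bdevperf process (started with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f, then driven via bdevperf.py perform_tests) are omitted, so this is a sketch of the sequence rather than the test script itself:

    # Sketch of the failover sequence from the host/failover.sh trace above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: transport, namespace, and three RDMA listeners on the first target IP
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s $port
    done

    # Initiator side (bdevperf RPC socket): attach the first two paths, then flip
    # listeners while verify I/O keeps running to force path failover.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420   # drop the active path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4421
    $rpc nvmf_subsystem_add_listener    $nqn -t rdma -a 192.168.100.8 -s 4420   # restore the original port
    $rpc nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4422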
00:24:01.243 [2024-07-15 18:15:47.995518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:29120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.995979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.995988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:29168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 
len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182b00 00:24:01.243 [2024-07-15 18:15:47.996154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.243 [2024-07-15 18:15:47.996165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996316] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:29312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:29336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 
00:24:01.244 [2024-07-15 18:15:47.996683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:29496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.244 [2024-07-15 18:15:47.996923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:24:01.244 [2024-07-15 18:15:47.996932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.996942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:29520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.996951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.996962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.996971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.996982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.996991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:29560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:29584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 
len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182b00 00:24:01.245 [2024-07-15 18:15:47.997371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:29736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:29752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.245 [2024-07-15 18:15:47.997723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.245 [2024-07-15 18:15:47.997733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:29896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:29904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.997977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.997988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 
18:15:47.997996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.998007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.998018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.998029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.998038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.998049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.998057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.998068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.998077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:47.998089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:47.998099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:48.000026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.246 [2024-07-15 18:15:48.000039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.246 [2024-07-15 18:15:48.000048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29992 len:8 PRP1 0x0 PRP2 0x0 00:24:01.246 [2024-07-15 18:15:48.000057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:48.000098] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:01.246 [2024-07-15 18:15:48.000110] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:01.246 [2024-07-15 18:15:48.000123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.246 [2024-07-15 18:15:48.002848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.246 [2024-07-15 18:15:48.017351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.246 [2024-07-15 18:15:48.064082] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
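The notices above capture one complete failover cycle: once the qpair on 192.168.100.8:4420 is disconnected, bdev_nvme fails the outstanding I/O with ABORTED - SQ DELETION, switches the target trid to 192.168.100.8:4421, and reports the controller reset as successful. When triaging a run like this it can help to pull just those cycle markers out of the console log; the sketch below is a hypothetical helper (the file name, function name, and regexes are illustrative only and are not part of SPDK or this pipeline), which scans a saved copy of the log for the failover and reset notices printed above.

    # Hypothetical helper (not part of SPDK or this pipeline): summarize the
    # failover/reset cycles from a saved console log such as the one above.
    # Assumes the console output was written to a local file, e.g. "nvmf-phy-autotest.log".
    import re

    FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
    RESET_OK = re.compile(r"Resetting controller successful")

    def summarize_failovers(path):
        """Return a list of (old_trid, new_trid, reset_ok) tuples found in the log."""
        events = []
        pending = None
        with open(path) as f:
            for line in f:
                m = FAILOVER.search(line)
                if m:
                    pending = (m.group(1), m.group(2))
                elif pending and RESET_OK.search(line):
                    events.append((*pending, True))
                    pending = None
        return events

    if __name__ == "__main__":
        for old, new, ok in summarize_failovers("nvmf-phy-autotest.log"):
            print(f"failover {old} -> {new}: reset {'ok' if ok else 'failed'}")

Run against this log, such a script would report the cycle above as "failover 192.168.100.8:4420 -> 192.168.100.8:4421: reset ok".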
00:24:01.246 [2024-07-15 18:15:51.454928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182b00 00:24:01.246 [2024-07-15 18:15:51.454971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.454991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182b00 00:24:01.246 [2024-07-15 18:15:51.455002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182b00 00:24:01.246 [2024-07-15 18:15:51.455027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182b00 00:24:01.246 [2024-07-15 18:15:51.455048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182b00 00:24:01.246 [2024-07-15 18:15:51.455068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:01.246 [2024-07-15 18:15:51.455178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.246 [2024-07-15 18:15:51.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.246 [2024-07-15 18:15:51.455358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128280 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182b00 00:24:01.247 [2024-07-15 18:15:51.455888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455938] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.455987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.455997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.456006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.247 [2024-07-15 18:15:51.456023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.247 [2024-07-15 18:15:51.456033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 
00:24:01.248 [2024-07-15 18:15:51.456517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182b00 00:24:01.248 [2024-07-15 18:15:51.456699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.248 [2024-07-15 18:15:51.456840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.248 [2024-07-15 18:15:51.456851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.456860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.456880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128048 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000759c000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.456900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.456920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.456940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.456959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.456980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.456990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128136 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182b00 00:24:01.249 [2024-07-15 18:15:51.457347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 
dnr:0 00:24:01.249 [2024-07-15 18:15:51.457476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.457537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:51.457546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.459371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.249 [2024-07-15 18:15:51.459385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.249 [2024-07-15 18:15:51.459393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128632 len:8 PRP1 0x0 PRP2 0x0 00:24:01.249 [2024-07-15 18:15:51.459403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:51.459444] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:01.249 [2024-07-15 18:15:51.459455] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:24:01.249 [2024-07-15 18:15:51.459466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.249 [2024-07-15 18:15:51.462187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.249 [2024-07-15 18:15:51.476885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.249 [2024-07-15 18:15:51.523130] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
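Each burst of *NOTICE* lines above is bdev_nvme manually completing the I/O that was still queued when the submission queue was torn down: every outstanding WRITE/READ is printed once, completed with ABORTED - SQ DELETION, and the driver then fails the path over (here from 192.168.100.8:4421 to 192.168.100.8:4422) and resets the controller. A run this noisy is easier to review after condensing it; a minimal sketch, not part of the test script, assuming the bdevperf output was captured in try.txt as this job does later in the trace:

    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt   # path taken from the trace below
    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$log"    # one line per path switch
    grep -c 'Resetting controller successful' "$log"             # resets that completed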
00:24:01.249 [2024-07-15 18:15:55.826241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.249 [2024-07-15 18:15:55.826283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.249 [2024-07-15 18:15:55.826301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 
18:15:55.826877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.826926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182b00 00:24:01.250 [2024-07-15 18:15:55.826984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.826995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827066] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.250 [2024-07-15 18:15:55.827124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.250 [2024-07-15 18:15:55.827136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827258] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827651] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.251 [2024-07-15 18:15:55.827798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:24:01.251 [2024-07-15 18:15:55.827837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.251 [2024-07-15 18:15:55.827849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.827859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.827878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.827898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.827917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.827938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.827957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.827976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.827987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.827995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 
[2024-07-15 18:15:55.828037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.252 [2024-07-15 18:15:55.828537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:92 nsid:1 lba:105616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:24:01.252 [2024-07-15 18:15:55.828639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.252 [2024-07-15 18:15:55.828649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:24:01.253 [2024-07-15 18:15:55.828658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182b00 00:24:01.253 [2024-07-15 18:15:55.828679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182b00 00:24:01.253 [2024-07-15 18:15:55.828699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.828829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.253 [2024-07-15 18:15:55.828838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:685e2000 sqhd:52b0 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.830757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:01.253 [2024-07-15 18:15:55.830772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:01.253 [2024-07-15 18:15:55.830781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106288 len:8 PRP1 0x0 PRP2 0x0 00:24:01.253 [2024-07-15 18:15:55.830791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.253 [2024-07-15 18:15:55.830831] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:01.253 [2024-07-15 18:15:55.830843] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:24:01.253 [2024-07-15 18:15:55.830854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:01.253 [2024-07-15 18:15:55.833556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:01.253 [2024-07-15 18:15:55.847456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:01.253 [2024-07-15 18:15:55.895671] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
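This reset closes the last failover of the 15-second phase: the controller has now moved from 192.168.100.8:4422 back to 192.168.100.8:4420. bdev_nvme has somewhere to fail over to only because the same controller name was presumably attached with each candidate port earlier in the log, the same way the second bdevperf instance is configured further down in this trace. A hedged sketch of that registration, using the rpc.py calls, addresses, ports and NQN visible in this job (the loop is shorthand, not the script's literal form):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    for port in 4420 4421 4422; do
            # each attach with the same -b name registers another failover path for NVMe0
            $rpc_py -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
                    -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done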
00:24:01.253 
00:24:01.253                                                                   Latency(us)
00:24:01.253 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:01.253 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:01.253 Verification LBA range: start 0x0 length 0x4000
00:24:01.253 NVMe0n1                     :      15.01   14566.90      56.90     335.67       0.00    8566.56     337.51 1020054.73
00:24:01.253 ===================================================================================================================
00:24:01.253 Total                       :              14566.90      56.90     335.67       0.00    8566.56     337.51 1020054.73
00:24:01.253 Received shutdown signal, test time was about 15.000000 seconds
00:24:01.253 
00:24:01.253                                                                   Latency(us)
00:24:01.253 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:01.253 ===================================================================================================================
00:24:01.253 Total                       :                   0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1754889
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1754889 /var/tmp/bdevperf.sock
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1754889 ']'
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
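host/failover.sh@65-67 above is the pass/fail gate for this phase: the captured bdevperf output is grepped for 'Resetting controller successful' and the run is only accepted if exactly three resets completed, presumably one per forced path switch. A minimal reconstruction of that check (variable names are assumptions; only the grep pattern, the file and the expected count of 3 are taken from the trace):

    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")   # $testdir is assumed
    if (( count != 3 )); then
            echo "expected 3 successful controller resets, got $count"
            exit 1
    fi

The script then launches a second bdevperf instance with -z, so it sits idle on /var/tmp/bdevperf.sock until it is configured over RPC and kicked off by the perform_tests call that appears later in the trace.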
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:01.253 18:16:01 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:01.818 18:16:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:01.818 18:16:02 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:01.818 18:16:02 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:24:02.076 [2024-07-15 18:16:02.240433] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:24:02.076 18:16:02 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:24:02.076 [2024-07-15 18:16:02.421034] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:24:02.076 18:16:02 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.334 NVMe0n1
00:24:02.334 18:16:02 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.593 
00:24:02.593 18:16:02 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:02.866 
00:24:02.866 18:16:03 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:02.866 18:16:03 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:03.131 18:16:03 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:03.389 18:16:03 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:06.677 18:16:06 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:06.677 18:16:06 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:06.677 18:16:06 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1755774
00:24:06.677 18:16:06 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:06.677 18:16:06 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 1755774
00:24:07.612 0
00:24:07.612 18:16:07 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:07.612 [2024-07-15 18:16:01.277514] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization...
00:24:07.612 [2024-07-15 18:16:01.277572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754889 ] 00:24:07.612 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.612 [2024-07-15 18:16:01.363385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.612 [2024-07-15 18:16:01.429002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.612 [2024-07-15 18:16:03.550975] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:07.612 [2024-07-15 18:16:03.551580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.612 [2024-07-15 18:16:03.551614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.612 [2024-07-15 18:16:03.574442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.612 [2024-07-15 18:16:03.590464] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:07.612 Running I/O for 1 seconds... 00:24:07.612 00:24:07.612 Latency(us) 00:24:07.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.612 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:07.612 Verification LBA range: start 0x0 length 0x4000 00:24:07.612 NVMe0n1 : 1.01 18206.38 71.12 0.00 0.00 6992.35 2516.58 17196.65 00:24:07.612 =================================================================================================================== 00:24:07.612 Total : 18206.38 71.12 0.00 0.00 6992.35 2516.58 17196.65 00:24:07.612 18:16:07 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.612 18:16:07 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:07.871 18:16:08 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:07.871 18:16:08 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:07.871 18:16:08 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:08.130 18:16:08 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:08.388 18:16:08 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 1754889 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1754889 ']' 00:24:11.673 
18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1754889 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1754889 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1754889' 00:24:11.673 killing process with pid 1754889 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1754889 00:24:11.673 18:16:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1754889 00:24:11.673 18:16:12 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:11.673 18:16:12 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:11.932 rmmod nvme_rdma 00:24:11.932 rmmod nvme_fabrics 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1751680 ']' 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1751680 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1751680 ']' 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1751680 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.932 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1751680 00:24:12.191 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.191 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.191 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 1751680' 00:24:12.191 killing process with pid 1751680 00:24:12.191 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1751680 00:24:12.191 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1751680 00:24:12.450 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.450 18:16:12 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:12.450 00:24:12.450 real 0m38.616s 00:24:12.450 user 2m3.926s 00:24:12.450 sys 0m8.581s 00:24:12.450 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.450 18:16:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:12.450 ************************************ 00:24:12.450 END TEST nvmf_failover 00:24:12.450 ************************************ 00:24:12.450 18:16:12 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:12.450 18:16:12 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:12.450 18:16:12 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.450 18:16:12 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.450 18:16:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:12.450 ************************************ 00:24:12.450 START TEST nvmf_host_discovery 00:24:12.450 ************************************ 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:12.450 * Looking for test storage... 00:24:12.450 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.450 18:16:12 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:12.450 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:24:12.450 00:24:12.450 real 0m0.128s 00:24:12.450 user 0m0.053s 00:24:12.450 sys 0m0.084s 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.450 18:16:12 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.450 ************************************ 00:24:12.450 END TEST nvmf_host_discovery 00:24:12.450 ************************************ 00:24:12.710 18:16:12 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:12.710 18:16:12 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:12.710 18:16:12 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.710 18:16:12 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.710 18:16:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:12.710 ************************************ 00:24:12.710 START TEST nvmf_host_multipath_status 00:24:12.710 ************************************ 00:24:12.710 18:16:12 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:12.710 * Looking for test storage... 
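nvmf_host_discovery exits almost immediately here: the script checks the transport before doing any work and bails out on RDMA. A minimal sketch of that guard, assuming the transport argument is available in a variable such as $TEST_TRANSPORT (the variable name is an assumption; the message and exit code are taken from the log above):

# Early-exit guard for discovery tests on RDMA, as in host/discovery.sh.
if [[ "$TEST_TRANSPORT" == "rdma" ]]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
fi
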
00:24:12.710 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.710 18:16:13 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.710 18:16:13 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:20.897 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:20.897 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:20.897 
18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:20.897 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:20.897 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:20.897 18:16:20 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:20.897 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:20.898 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.898 link/ether ec:0d:9a:8b:2d:dc brd 
ff:ff:ff:ff:ff:ff 00:24:20.898 altname enp217s0f0np0 00:24:20.898 altname ens818f0np0 00:24:20.898 inet 192.168.100.8/24 scope global mlx_0_0 00:24:20.898 valid_lft forever preferred_lft forever 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:20.898 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:20.898 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:20.898 altname enp217s0f1np1 00:24:20.898 altname ens818f1np1 00:24:20.898 inet 192.168.100.9/24 scope global mlx_0_1 00:24:20.898 valid_lft forever preferred_lft forever 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.898 18:16:20 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:20.898 192.168.100.9' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:20.898 192.168.100.9' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:20.898 192.168.100.9' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1760793 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1760793 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1760793 ']' 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.898 18:16:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:20.898 [2024-07-15 18:16:20.658666] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:24:20.898 [2024-07-15 18:16:20.658715] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.898 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.898 [2024-07-15 18:16:20.740805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:20.898 [2024-07-15 18:16:20.813882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.898 [2024-07-15 18:16:20.813919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.898 [2024-07-15 18:16:20.813928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.898 [2024-07-15 18:16:20.813936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.898 [2024-07-15 18:16:20.813943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
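At this point the multipath test has started its own nvmf target (nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 1760793) and waits for its RPC socket before configuring it. A rough sketch of that start-and-wait pattern; the polling loop below is illustrative and stands in for the waitforlisten helper rather than reproducing it ($rootdir stands for the SPDK checkout shown in the paths above):

# Launch the NVMe-oF target on cores 0-1 with all tracepoint groups enabled,
# then poll its RPC socket until it answers before sending configuration RPCs.
$rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    if $rootdir/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
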
00:24:20.898 [2024-07-15 18:16:20.814039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.898 [2024-07-15 18:16:20.814042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1760793 00:24:21.158 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:21.416 [2024-07-15 18:16:21.694368] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2318640/0x231cb30) succeed. 00:24:21.416 [2024-07-15 18:16:21.703307] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2319af0/0x235e1c0) succeed. 00:24:21.416 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:21.675 Malloc0 00:24:21.675 18:16:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:21.935 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.935 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:22.195 [2024-07-15 18:16:22.477679] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:22.195 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:22.454 [2024-07-15 18:16:22.637911] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1761086 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 1761086 /var/tmp/bdevperf.sock 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1761086 ']' 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.454 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.455 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.455 18:16:22 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:23.403 18:16:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.403 18:16:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:23.403 18:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:23.403 18:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:23.662 Nvme0n1 00:24:23.662 18:16:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:23.920 Nvme0n1 00:24:23.920 18:16:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:23.920 18:16:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:25.825 18:16:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:25.825 18:16:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:24:26.084 18:16:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:26.343 18:16:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:27.278 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:27.278 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.278 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:27.278 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.537 18:16:27 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.797 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.797 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.797 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.797 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.055 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.313 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.313 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:24:28.313 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:28.571 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:28.829 18:16:28 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:29.763 18:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:29.763 18:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:29.763 18:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.763 18:16:29 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.021 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.280 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.280 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:30.280 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.280 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.538 18:16:30 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.796 18:16:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.797 18:16:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:30.797 18:16:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:31.055 18:16:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:24:31.055 18:16:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.429 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.429 
18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.688 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.688 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.688 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.688 18:16:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.946 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.204 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.204 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:33.204 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:33.463 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:24:33.463 18:16:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:34.454 18:16:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:34.454 18:16:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:34.454 18:16:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.454 18:16:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:24:34.712 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.712 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:34.712 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.712 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.971 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.229 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.229 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.229 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.229 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.488 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.488 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:35.488 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.488 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.488 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:35.747 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:35.747 18:16:35 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:24:35.747 18:16:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:24:36.005 18:16:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:36.940 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:36.940 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:36.940 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.940 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.199 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.199 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:37.199 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.199 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.457 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.715 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.715 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:37.715 18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.715 
18:16:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:37.973 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:24:38.230 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:38.488 18:16:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:39.421 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:39.421 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:39.421 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.421 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.680 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.680 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:39.680 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.680 18:16:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.680 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.680 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.680 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.680 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:39.938 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.938 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.938 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.938 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.194 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.453 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.453 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:40.712 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:40.712 18:16:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:24:40.712 18:16:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:40.971 18:16:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:41.907 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:41.907 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.907 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.907 18:16:42 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:42.166 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.166 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:42.166 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:42.166 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.428 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.686 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.686 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:42.686 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.686 18:16:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.965 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.965 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:42.965 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.965 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.965 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.965 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:42.965 
18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:43.227 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:24:43.486 18:16:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:44.418 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:44.418 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:44.418 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.418 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:44.676 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:44.676 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:44.676 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.676 18:16:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:44.676 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.676 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:44.676 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.676 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:44.934 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:44.934 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:44.934 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:44.934 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:45.193 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:45.452 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:45.452 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:45.452 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:45.710 18:16:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:24:45.710 18:16:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:47.142 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.142 
18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:47.402 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.402 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:47.402 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.402 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:47.660 18:16:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:47.917 18:16:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:47.917 18:16:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:47.917 18:16:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:24:48.175 18:16:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:24:48.175 18:16:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:49.550 18:16:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.808 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:49.808 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:49.808 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:49.808 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:50.066 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1761086 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1761086 ']' 00:24:50.325 
18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1761086 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1761086 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1761086' 00:24:50.325 killing process with pid 1761086 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1761086 00:24:50.325 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1761086 00:24:50.587 Connection closed with partial response: 00:24:50.588 00:24:50.588 00:24:50.588 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1761086 00:24:50.588 18:16:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.588 [2024-07-15 18:16:22.701284] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:24:50.588 [2024-07-15 18:16:22.701343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761086 ] 00:24:50.588 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.588 [2024-07-15 18:16:22.781699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.588 [2024-07-15 18:16:22.852168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.588 Running I/O for 90 seconds... 
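The trace above repeatedly pairs nvmf_subsystem_listener_set_ana_state on the target side (multipath_status.sh@59-60) with bdev_nvme_get_io_paths on the bdevperf initiator (multipath_status.sh@64), checking the current/connected/accessible flags of each listener port after every ANA change. A condensed sketch of that pattern, assuming the same rpc.py location, bdevperf RPC socket and subsystem NQN used in this run (illustrative only, not the literal multipath_status.sh source):

# Sketch of the set_ANA_state / port_status pattern traced above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a 192.168.100.8 -s 4420 -n $1
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a 192.168.100.8 -s 4421 -n $2
}

port_status() {     # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
    local val
    val=$($rpc -s $sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$val" == "$3" ]]
}

# e.g. after switching the 4420 listener to non_optimized and 4421 to optimized,
# the 4421 path is expected to become the current one (as in the @94/@96 checks above):
set_ANA_state non_optimized optimized
sleep 1
port_status 4421 current true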
00:24:50.588 [2024-07-15 18:16:36.050766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.050987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.050999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051025] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:90344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184500 00:24:50.588 [2024-07-15 18:16:36.051517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.588 [2024-07-15 18:16:36.051530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90416 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90488 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000759c000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.051981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.051993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 
key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x184500 00:24:50.589 [2024-07-15 18:16:36.052167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.589 [2024-07-15 18:16:36.052189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.589 [2024-07-15 18:16:36.052209] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:50.589 [2024-07-15 18:16:36.052221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052413] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.590 [2024-07-15 18:16:36.052795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:90624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:50.590 [2024-07-15 18:16:36.052973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184500 00:24:50.590 [2024-07-15 18:16:36.052982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.052995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90696 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x200007512000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 
key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 
18:16:36.053403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:90880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:36.053786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.591 [2024-07-15 18:16:36.053815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:36.053834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.591 [2024-07-15 18:16:36.053843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:48.493595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:48.493632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:48.493663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:48.493673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:48.493686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.591 [2024-07-15 18:16:48.493695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:48.493708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:48.493717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:48.494223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.591 [2024-07-15 18:16:48.494236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:50.591 [2024-07-15 18:16:48.494249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184500 00:24:50.591 [2024-07-15 18:16:48.494259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 
nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494788] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.592 [2024-07-15 18:16:48.494841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184500 00:24:50.592 [2024-07-15 18:16:48.494903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:50.592 [2024-07-15 18:16:48.494967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.494979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.494991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 
cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x184500 00:24:50.593 [2024-07-15 18:16:48.495408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:50.593 [2024-07-15 18:16:48.495430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:50.593 [2024-07-15 18:16:48.495441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.593 [2024-07-15 18:16:48.495450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:50.593 [2024-07-15 18:16:48.495461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.593 [2024-07-15 18:16:48.495471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:50.593 [2024-07-15 18:16:48.495483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:84576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184500
00:24:50.593 [2024-07-15 18:16:48.495493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:50.593 [2024-07-15 18:16:48.495504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.593 [2024-07-15 18:16:48.495513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:50.593 [2024-07-15 18:16:48.495525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184500
00:24:50.593 [2024-07-15 18:16:48.495534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:50.593 [2024-07-15 18:16:48.495546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:50.593 [2024-07-15 18:16:48.495556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:50.593 Received shutdown signal, test time was about 26.332226 seconds
00:24:50.593
00:24:50.593 Latency(us)
00:24:50.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:50.593 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:50.593 Verification LBA range: start 0x0 length 0x4000
00:24:50.593 Nvme0n1 : 26.33 16259.71 63.51 0.00 0.00 7853.53 1146.88 3019898.88
00:24:50.593 ===================================================================================================================
00:24:50.593 Total : 16259.71 63.51 0.00 0.00 7853.53 1146.88 3019898.88
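A quick cross-check of the verify job summary above (a hypothetical helper, not part of the captured test output): the IOPS column times the 4096-byte IO size from the Job line should reproduce the MiB/s column, and IOPS times the runtime approximates how many IOs completed before the shutdown signal.

    # Sanity-check the Nvme0n1 summary row using only values printed above.
    awk 'BEGIN {
        iops = 16259.71; io_size = 4096; runtime = 26.33                      # from the table above
        printf "throughput: %.2f MiB/s\n", iops * io_size / (1024 * 1024)     # ~63.51, matches the MiB/s column
        printf "total IOs : %.0f\n", iops * runtime                           # ~428118 IOs over 26.33 s
    }'

Since only the one job ran, the Total row simply repeats the Nvme0n1 row.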
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:50.852 rmmod nvme_rdma 00:24:50.852 rmmod nvme_fabrics 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1760793 ']' 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1760793 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1760793 ']' 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1760793 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1760793 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1760793' 00:24:50.852 killing process with pid 1760793 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1760793 00:24:50.852 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1760793 00:24:51.110 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.111 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:51.111 00:24:51.111 real 0m38.512s 00:24:51.111 user 1m45.541s 00:24:51.111 sys 0m10.074s 00:24:51.111 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.111 18:16:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:51.111 ************************************ 00:24:51.111 END TEST nvmf_host_multipath_status 00:24:51.111 ************************************ 00:24:51.111 18:16:51 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:51.111 18:16:51 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:51.111 18:16:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:51.111 18:16:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.111 18:16:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.111 ************************************ 
00:24:51.111 START TEST nvmf_discovery_remove_ifc 00:24:51.111 ************************************ 00:24:51.111 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:24:51.369 * Looking for test storage... 00:24:51.369 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.369 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:24:51.370 18:16:51 
nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:51.370 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:24:51.370 00:24:51.370 real 0m0.144s 00:24:51.370 user 0m0.064s 00:24:51.370 sys 0m0.090s 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:51.370 18:16:51 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.370 ************************************ 00:24:51.370 END TEST nvmf_discovery_remove_ifc 00:24:51.370 ************************************ 00:24:51.370 18:16:51 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:51.370 18:16:51 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:51.370 18:16:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:51.370 18:16:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.370 18:16:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.370 ************************************ 00:24:51.370 START TEST nvmf_identify_kernel_target 00:24:51.370 ************************************ 00:24:51.370 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:24:51.629 * Looking for test storage... 00:24:51.629 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:51.629 18:16:51 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.629 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:51.630 18:16:51 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.750 18:16:59 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:59.750 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- 
# [[ mlx5_core == unknown ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:59.750 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:59.750 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:59.751 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:59.751 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:59.751 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:59.751 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:59.751 altname enp217s0f0np0 00:24:59.751 altname ens818f0np0 00:24:59.751 inet 192.168.100.8/24 scope global mlx_0_0 00:24:59.751 valid_lft forever preferred_lft forever 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:59.751 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:59.751 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:59.751 altname enp217s0f1np1 00:24:59.751 altname ens818f1np1 00:24:59.751 inet 192.168.100.9/24 scope global mlx_0_1 00:24:59.751 valid_lft forever preferred_lft forever 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:59.751 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 
-- # get_available_rdma_ips 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:59.752 192.168.100.9' 
00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:59.752 192.168.100.9' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:59.752 192.168.100.9' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.752 18:16:59 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:59.752 18:16:59 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:03.041 Waiting for block devices as requested 00:25:03.300 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:03.300 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:03.300 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:03.300 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:03.559 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:03.559 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:03.559 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:03.818 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:03.818 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:03.818 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:04.077 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:04.077 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:04.077 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:04.336 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:04.336 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:04.336 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:04.595 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:04.595 No valid GPT data, bailing 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:04.595 18:17:04 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:04.595 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:04.853 18:17:04 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:04.853 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:25:04.853 00:25:04.853 Discovery Log Number of Records 2, Generation counter 2 00:25:04.853 =====Discovery Log Entry 0====== 00:25:04.853 trtype: rdma 00:25:04.853 adrfam: ipv4 00:25:04.853 subtype: current discovery subsystem 00:25:04.853 treq: not specified, sq flow control disable supported 00:25:04.853 portid: 1 00:25:04.853 trsvcid: 4420 00:25:04.853 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:04.853 traddr: 192.168.100.8 00:25:04.853 eflags: none 00:25:04.853 rdma_prtype: not specified 00:25:04.853 rdma_qptype: connected 00:25:04.853 rdma_cms: rdma-cm 00:25:04.853 rdma_pkey: 0x0000 00:25:04.853 =====Discovery Log Entry 1====== 00:25:04.853 trtype: rdma 00:25:04.853 adrfam: ipv4 00:25:04.853 subtype: nvme subsystem 00:25:04.853 treq: not specified, sq flow control disable supported 00:25:04.853 portid: 1 00:25:04.853 trsvcid: 4420 00:25:04.853 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:04.853 traddr: 192.168.100.8 00:25:04.853 eflags: none 00:25:04.853 rdma_prtype: not specified 00:25:04.853 rdma_qptype: connected 00:25:04.853 rdma_cms: rdma-cm 00:25:04.853 rdma_pkey: 0x0000 00:25:04.853 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:25:04.853 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:04.853 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.112 ===================================================== 00:25:05.112 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:05.112 ===================================================== 00:25:05.112 Controller 
Capabilities/Features 00:25:05.112 ================================ 00:25:05.112 Vendor ID: 0000 00:25:05.112 Subsystem Vendor ID: 0000 00:25:05.112 Serial Number: bb22b44399af2046fa94 00:25:05.112 Model Number: Linux 00:25:05.112 Firmware Version: 6.7.0-68 00:25:05.112 Recommended Arb Burst: 0 00:25:05.112 IEEE OUI Identifier: 00 00 00 00:25:05.112 Multi-path I/O 00:25:05.112 May have multiple subsystem ports: No 00:25:05.112 May have multiple controllers: No 00:25:05.112 Associated with SR-IOV VF: No 00:25:05.112 Max Data Transfer Size: Unlimited 00:25:05.112 Max Number of Namespaces: 0 00:25:05.112 Max Number of I/O Queues: 1024 00:25:05.112 NVMe Specification Version (VS): 1.3 00:25:05.112 NVMe Specification Version (Identify): 1.3 00:25:05.112 Maximum Queue Entries: 128 00:25:05.112 Contiguous Queues Required: No 00:25:05.112 Arbitration Mechanisms Supported 00:25:05.112 Weighted Round Robin: Not Supported 00:25:05.112 Vendor Specific: Not Supported 00:25:05.112 Reset Timeout: 7500 ms 00:25:05.112 Doorbell Stride: 4 bytes 00:25:05.112 NVM Subsystem Reset: Not Supported 00:25:05.112 Command Sets Supported 00:25:05.112 NVM Command Set: Supported 00:25:05.112 Boot Partition: Not Supported 00:25:05.112 Memory Page Size Minimum: 4096 bytes 00:25:05.112 Memory Page Size Maximum: 4096 bytes 00:25:05.112 Persistent Memory Region: Not Supported 00:25:05.112 Optional Asynchronous Events Supported 00:25:05.112 Namespace Attribute Notices: Not Supported 00:25:05.112 Firmware Activation Notices: Not Supported 00:25:05.112 ANA Change Notices: Not Supported 00:25:05.112 PLE Aggregate Log Change Notices: Not Supported 00:25:05.112 LBA Status Info Alert Notices: Not Supported 00:25:05.112 EGE Aggregate Log Change Notices: Not Supported 00:25:05.112 Normal NVM Subsystem Shutdown event: Not Supported 00:25:05.112 Zone Descriptor Change Notices: Not Supported 00:25:05.112 Discovery Log Change Notices: Supported 00:25:05.112 Controller Attributes 00:25:05.112 128-bit Host Identifier: Not Supported 00:25:05.112 Non-Operational Permissive Mode: Not Supported 00:25:05.112 NVM Sets: Not Supported 00:25:05.112 Read Recovery Levels: Not Supported 00:25:05.112 Endurance Groups: Not Supported 00:25:05.112 Predictable Latency Mode: Not Supported 00:25:05.112 Traffic Based Keep ALive: Not Supported 00:25:05.112 Namespace Granularity: Not Supported 00:25:05.112 SQ Associations: Not Supported 00:25:05.112 UUID List: Not Supported 00:25:05.112 Multi-Domain Subsystem: Not Supported 00:25:05.112 Fixed Capacity Management: Not Supported 00:25:05.112 Variable Capacity Management: Not Supported 00:25:05.112 Delete Endurance Group: Not Supported 00:25:05.112 Delete NVM Set: Not Supported 00:25:05.112 Extended LBA Formats Supported: Not Supported 00:25:05.112 Flexible Data Placement Supported: Not Supported 00:25:05.112 00:25:05.112 Controller Memory Buffer Support 00:25:05.112 ================================ 00:25:05.112 Supported: No 00:25:05.112 00:25:05.112 Persistent Memory Region Support 00:25:05.112 ================================ 00:25:05.112 Supported: No 00:25:05.112 00:25:05.112 Admin Command Set Attributes 00:25:05.112 ============================ 00:25:05.112 Security Send/Receive: Not Supported 00:25:05.112 Format NVM: Not Supported 00:25:05.112 Firmware Activate/Download: Not Supported 00:25:05.112 Namespace Management: Not Supported 00:25:05.112 Device Self-Test: Not Supported 00:25:05.112 Directives: Not Supported 00:25:05.112 NVMe-MI: Not Supported 00:25:05.112 Virtualization Management: Not Supported 
00:25:05.112 Doorbell Buffer Config: Not Supported 00:25:05.112 Get LBA Status Capability: Not Supported 00:25:05.112 Command & Feature Lockdown Capability: Not Supported 00:25:05.112 Abort Command Limit: 1 00:25:05.112 Async Event Request Limit: 1 00:25:05.112 Number of Firmware Slots: N/A 00:25:05.112 Firmware Slot 1 Read-Only: N/A 00:25:05.112 Firmware Activation Without Reset: N/A 00:25:05.112 Multiple Update Detection Support: N/A 00:25:05.112 Firmware Update Granularity: No Information Provided 00:25:05.112 Per-Namespace SMART Log: No 00:25:05.112 Asymmetric Namespace Access Log Page: Not Supported 00:25:05.112 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:05.112 Command Effects Log Page: Not Supported 00:25:05.112 Get Log Page Extended Data: Supported 00:25:05.112 Telemetry Log Pages: Not Supported 00:25:05.112 Persistent Event Log Pages: Not Supported 00:25:05.112 Supported Log Pages Log Page: May Support 00:25:05.112 Commands Supported & Effects Log Page: Not Supported 00:25:05.112 Feature Identifiers & Effects Log Page:May Support 00:25:05.112 NVMe-MI Commands & Effects Log Page: May Support 00:25:05.112 Data Area 4 for Telemetry Log: Not Supported 00:25:05.112 Error Log Page Entries Supported: 1 00:25:05.112 Keep Alive: Not Supported 00:25:05.112 00:25:05.112 NVM Command Set Attributes 00:25:05.112 ========================== 00:25:05.112 Submission Queue Entry Size 00:25:05.112 Max: 1 00:25:05.112 Min: 1 00:25:05.112 Completion Queue Entry Size 00:25:05.112 Max: 1 00:25:05.112 Min: 1 00:25:05.112 Number of Namespaces: 0 00:25:05.112 Compare Command: Not Supported 00:25:05.112 Write Uncorrectable Command: Not Supported 00:25:05.112 Dataset Management Command: Not Supported 00:25:05.112 Write Zeroes Command: Not Supported 00:25:05.112 Set Features Save Field: Not Supported 00:25:05.112 Reservations: Not Supported 00:25:05.112 Timestamp: Not Supported 00:25:05.112 Copy: Not Supported 00:25:05.112 Volatile Write Cache: Not Present 00:25:05.112 Atomic Write Unit (Normal): 1 00:25:05.112 Atomic Write Unit (PFail): 1 00:25:05.112 Atomic Compare & Write Unit: 1 00:25:05.112 Fused Compare & Write: Not Supported 00:25:05.112 Scatter-Gather List 00:25:05.112 SGL Command Set: Supported 00:25:05.112 SGL Keyed: Supported 00:25:05.112 SGL Bit Bucket Descriptor: Not Supported 00:25:05.112 SGL Metadata Pointer: Not Supported 00:25:05.112 Oversized SGL: Not Supported 00:25:05.112 SGL Metadata Address: Not Supported 00:25:05.112 SGL Offset: Supported 00:25:05.112 Transport SGL Data Block: Not Supported 00:25:05.112 Replay Protected Memory Block: Not Supported 00:25:05.112 00:25:05.112 Firmware Slot Information 00:25:05.112 ========================= 00:25:05.112 Active slot: 0 00:25:05.112 00:25:05.112 00:25:05.112 Error Log 00:25:05.112 ========= 00:25:05.112 00:25:05.112 Active Namespaces 00:25:05.112 ================= 00:25:05.112 Discovery Log Page 00:25:05.112 ================== 00:25:05.112 Generation Counter: 2 00:25:05.112 Number of Records: 2 00:25:05.112 Record Format: 0 00:25:05.112 00:25:05.112 Discovery Log Entry 0 00:25:05.112 ---------------------- 00:25:05.112 Transport Type: 1 (RDMA) 00:25:05.112 Address Family: 1 (IPv4) 00:25:05.112 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:05.112 Entry Flags: 00:25:05.112 Duplicate Returned Information: 0 00:25:05.112 Explicit Persistent Connection Support for Discovery: 0 00:25:05.112 Transport Requirements: 00:25:05.112 Secure Channel: Not Specified 00:25:05.112 Port ID: 1 (0x0001) 00:25:05.112 Controller ID: 65535 
(0xffff) 00:25:05.112 Admin Max SQ Size: 32 00:25:05.112 Transport Service Identifier: 4420 00:25:05.112 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:05.112 Transport Address: 192.168.100.8 00:25:05.112 Transport Specific Address Subtype - RDMA 00:25:05.112 RDMA QP Service Type: 1 (Reliable Connected) 00:25:05.112 RDMA Provider Type: 1 (No provider specified) 00:25:05.112 RDMA CM Service: 1 (RDMA_CM) 00:25:05.112 Discovery Log Entry 1 00:25:05.112 ---------------------- 00:25:05.112 Transport Type: 1 (RDMA) 00:25:05.112 Address Family: 1 (IPv4) 00:25:05.112 Subsystem Type: 2 (NVM Subsystem) 00:25:05.112 Entry Flags: 00:25:05.112 Duplicate Returned Information: 0 00:25:05.112 Explicit Persistent Connection Support for Discovery: 0 00:25:05.112 Transport Requirements: 00:25:05.112 Secure Channel: Not Specified 00:25:05.112 Port ID: 1 (0x0001) 00:25:05.112 Controller ID: 65535 (0xffff) 00:25:05.112 Admin Max SQ Size: 32 00:25:05.113 Transport Service Identifier: 4420 00:25:05.113 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:05.113 Transport Address: 192.168.100.8 00:25:05.113 Transport Specific Address Subtype - RDMA 00:25:05.113 RDMA QP Service Type: 1 (Reliable Connected) 00:25:05.113 RDMA Provider Type: 1 (No provider specified) 00:25:05.113 RDMA CM Service: 1 (RDMA_CM) 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:05.113 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.113 get_feature(0x01) failed 00:25:05.113 get_feature(0x02) failed 00:25:05.113 get_feature(0x04) failed 00:25:05.113 ===================================================== 00:25:05.113 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:25:05.113 ===================================================== 00:25:05.113 Controller Capabilities/Features 00:25:05.113 ================================ 00:25:05.113 Vendor ID: 0000 00:25:05.113 Subsystem Vendor ID: 0000 00:25:05.113 Serial Number: 3ede2eb4ae35e3b71af2 00:25:05.113 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:05.113 Firmware Version: 6.7.0-68 00:25:05.113 Recommended Arb Burst: 6 00:25:05.113 IEEE OUI Identifier: 00 00 00 00:25:05.113 Multi-path I/O 00:25:05.113 May have multiple subsystem ports: Yes 00:25:05.113 May have multiple controllers: Yes 00:25:05.113 Associated with SR-IOV VF: No 00:25:05.113 Max Data Transfer Size: 1048576 00:25:05.113 Max Number of Namespaces: 1024 00:25:05.113 Max Number of I/O Queues: 128 00:25:05.113 NVMe Specification Version (VS): 1.3 00:25:05.113 NVMe Specification Version (Identify): 1.3 00:25:05.113 Maximum Queue Entries: 128 00:25:05.113 Contiguous Queues Required: No 00:25:05.113 Arbitration Mechanisms Supported 00:25:05.113 Weighted Round Robin: Not Supported 00:25:05.113 Vendor Specific: Not Supported 00:25:05.113 Reset Timeout: 7500 ms 00:25:05.113 Doorbell Stride: 4 bytes 00:25:05.113 NVM Subsystem Reset: Not Supported 00:25:05.113 Command Sets Supported 00:25:05.113 NVM Command Set: Supported 00:25:05.113 Boot Partition: Not Supported 00:25:05.113 Memory Page Size Minimum: 4096 bytes 00:25:05.113 Memory Page Size Maximum: 4096 bytes 00:25:05.113 Persistent Memory Region: Not Supported 00:25:05.113 Optional Asynchronous Events Supported 00:25:05.113 Namespace Attribute Notices: Supported 00:25:05.113 
Firmware Activation Notices: Not Supported 00:25:05.113 ANA Change Notices: Supported 00:25:05.113 PLE Aggregate Log Change Notices: Not Supported 00:25:05.113 LBA Status Info Alert Notices: Not Supported 00:25:05.113 EGE Aggregate Log Change Notices: Not Supported 00:25:05.113 Normal NVM Subsystem Shutdown event: Not Supported 00:25:05.113 Zone Descriptor Change Notices: Not Supported 00:25:05.113 Discovery Log Change Notices: Not Supported 00:25:05.113 Controller Attributes 00:25:05.113 128-bit Host Identifier: Supported 00:25:05.113 Non-Operational Permissive Mode: Not Supported 00:25:05.113 NVM Sets: Not Supported 00:25:05.113 Read Recovery Levels: Not Supported 00:25:05.113 Endurance Groups: Not Supported 00:25:05.113 Predictable Latency Mode: Not Supported 00:25:05.113 Traffic Based Keep ALive: Supported 00:25:05.113 Namespace Granularity: Not Supported 00:25:05.113 SQ Associations: Not Supported 00:25:05.113 UUID List: Not Supported 00:25:05.113 Multi-Domain Subsystem: Not Supported 00:25:05.113 Fixed Capacity Management: Not Supported 00:25:05.113 Variable Capacity Management: Not Supported 00:25:05.113 Delete Endurance Group: Not Supported 00:25:05.113 Delete NVM Set: Not Supported 00:25:05.113 Extended LBA Formats Supported: Not Supported 00:25:05.113 Flexible Data Placement Supported: Not Supported 00:25:05.113 00:25:05.113 Controller Memory Buffer Support 00:25:05.113 ================================ 00:25:05.113 Supported: No 00:25:05.113 00:25:05.113 Persistent Memory Region Support 00:25:05.113 ================================ 00:25:05.113 Supported: No 00:25:05.113 00:25:05.113 Admin Command Set Attributes 00:25:05.113 ============================ 00:25:05.113 Security Send/Receive: Not Supported 00:25:05.113 Format NVM: Not Supported 00:25:05.113 Firmware Activate/Download: Not Supported 00:25:05.113 Namespace Management: Not Supported 00:25:05.113 Device Self-Test: Not Supported 00:25:05.113 Directives: Not Supported 00:25:05.113 NVMe-MI: Not Supported 00:25:05.113 Virtualization Management: Not Supported 00:25:05.113 Doorbell Buffer Config: Not Supported 00:25:05.113 Get LBA Status Capability: Not Supported 00:25:05.113 Command & Feature Lockdown Capability: Not Supported 00:25:05.113 Abort Command Limit: 4 00:25:05.113 Async Event Request Limit: 4 00:25:05.113 Number of Firmware Slots: N/A 00:25:05.113 Firmware Slot 1 Read-Only: N/A 00:25:05.113 Firmware Activation Without Reset: N/A 00:25:05.113 Multiple Update Detection Support: N/A 00:25:05.113 Firmware Update Granularity: No Information Provided 00:25:05.113 Per-Namespace SMART Log: Yes 00:25:05.113 Asymmetric Namespace Access Log Page: Supported 00:25:05.113 ANA Transition Time : 10 sec 00:25:05.113 00:25:05.113 Asymmetric Namespace Access Capabilities 00:25:05.113 ANA Optimized State : Supported 00:25:05.113 ANA Non-Optimized State : Supported 00:25:05.113 ANA Inaccessible State : Supported 00:25:05.113 ANA Persistent Loss State : Supported 00:25:05.113 ANA Change State : Supported 00:25:05.113 ANAGRPID is not changed : No 00:25:05.113 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:05.113 00:25:05.113 ANA Group Identifier Maximum : 128 00:25:05.113 Number of ANA Group Identifiers : 128 00:25:05.113 Max Number of Allowed Namespaces : 1024 00:25:05.113 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:05.113 Command Effects Log Page: Supported 00:25:05.113 Get Log Page Extended Data: Supported 00:25:05.113 Telemetry Log Pages: Not Supported 00:25:05.113 Persistent Event Log Pages: Not Supported 00:25:05.113 
Supported Log Pages Log Page: May Support 00:25:05.113 Commands Supported & Effects Log Page: Not Supported 00:25:05.113 Feature Identifiers & Effects Log Page:May Support 00:25:05.113 NVMe-MI Commands & Effects Log Page: May Support 00:25:05.113 Data Area 4 for Telemetry Log: Not Supported 00:25:05.113 Error Log Page Entries Supported: 128 00:25:05.113 Keep Alive: Supported 00:25:05.113 Keep Alive Granularity: 1000 ms 00:25:05.113 00:25:05.113 NVM Command Set Attributes 00:25:05.113 ========================== 00:25:05.113 Submission Queue Entry Size 00:25:05.113 Max: 64 00:25:05.113 Min: 64 00:25:05.113 Completion Queue Entry Size 00:25:05.113 Max: 16 00:25:05.113 Min: 16 00:25:05.113 Number of Namespaces: 1024 00:25:05.113 Compare Command: Not Supported 00:25:05.113 Write Uncorrectable Command: Not Supported 00:25:05.113 Dataset Management Command: Supported 00:25:05.113 Write Zeroes Command: Supported 00:25:05.113 Set Features Save Field: Not Supported 00:25:05.113 Reservations: Not Supported 00:25:05.113 Timestamp: Not Supported 00:25:05.113 Copy: Not Supported 00:25:05.113 Volatile Write Cache: Present 00:25:05.113 Atomic Write Unit (Normal): 1 00:25:05.113 Atomic Write Unit (PFail): 1 00:25:05.113 Atomic Compare & Write Unit: 1 00:25:05.113 Fused Compare & Write: Not Supported 00:25:05.113 Scatter-Gather List 00:25:05.113 SGL Command Set: Supported 00:25:05.113 SGL Keyed: Supported 00:25:05.113 SGL Bit Bucket Descriptor: Not Supported 00:25:05.113 SGL Metadata Pointer: Not Supported 00:25:05.113 Oversized SGL: Not Supported 00:25:05.113 SGL Metadata Address: Not Supported 00:25:05.113 SGL Offset: Supported 00:25:05.113 Transport SGL Data Block: Not Supported 00:25:05.113 Replay Protected Memory Block: Not Supported 00:25:05.113 00:25:05.113 Firmware Slot Information 00:25:05.113 ========================= 00:25:05.113 Active slot: 0 00:25:05.113 00:25:05.113 Asymmetric Namespace Access 00:25:05.113 =========================== 00:25:05.113 Change Count : 0 00:25:05.113 Number of ANA Group Descriptors : 1 00:25:05.113 ANA Group Descriptor : 0 00:25:05.113 ANA Group ID : 1 00:25:05.113 Number of NSID Values : 1 00:25:05.113 Change Count : 0 00:25:05.113 ANA State : 1 00:25:05.113 Namespace Identifier : 1 00:25:05.113 00:25:05.113 Commands Supported and Effects 00:25:05.113 ============================== 00:25:05.113 Admin Commands 00:25:05.113 -------------- 00:25:05.113 Get Log Page (02h): Supported 00:25:05.113 Identify (06h): Supported 00:25:05.113 Abort (08h): Supported 00:25:05.113 Set Features (09h): Supported 00:25:05.113 Get Features (0Ah): Supported 00:25:05.113 Asynchronous Event Request (0Ch): Supported 00:25:05.113 Keep Alive (18h): Supported 00:25:05.113 I/O Commands 00:25:05.113 ------------ 00:25:05.113 Flush (00h): Supported 00:25:05.113 Write (01h): Supported LBA-Change 00:25:05.113 Read (02h): Supported 00:25:05.113 Write Zeroes (08h): Supported LBA-Change 00:25:05.113 Dataset Management (09h): Supported 00:25:05.113 00:25:05.113 Error Log 00:25:05.113 ========= 00:25:05.113 Entry: 0 00:25:05.113 Error Count: 0x3 00:25:05.113 Submission Queue Id: 0x0 00:25:05.113 Command Id: 0x5 00:25:05.113 Phase Bit: 0 00:25:05.113 Status Code: 0x2 00:25:05.113 Status Code Type: 0x0 00:25:05.113 Do Not Retry: 1 00:25:05.113 Error Location: 0x28 00:25:05.113 LBA: 0x0 00:25:05.113 Namespace: 0x0 00:25:05.113 Vendor Log Page: 0x0 00:25:05.113 ----------- 00:25:05.113 Entry: 1 00:25:05.113 Error Count: 0x2 00:25:05.113 Submission Queue Id: 0x0 00:25:05.113 Command Id: 0x5 00:25:05.113 
Phase Bit: 0 00:25:05.113 Status Code: 0x2 00:25:05.113 Status Code Type: 0x0 00:25:05.113 Do Not Retry: 1 00:25:05.113 Error Location: 0x28 00:25:05.113 LBA: 0x0 00:25:05.113 Namespace: 0x0 00:25:05.113 Vendor Log Page: 0x0 00:25:05.113 ----------- 00:25:05.113 Entry: 2 00:25:05.113 Error Count: 0x1 00:25:05.113 Submission Queue Id: 0x0 00:25:05.113 Command Id: 0x0 00:25:05.113 Phase Bit: 0 00:25:05.113 Status Code: 0x2 00:25:05.113 Status Code Type: 0x0 00:25:05.113 Do Not Retry: 1 00:25:05.113 Error Location: 0x28 00:25:05.113 LBA: 0x0 00:25:05.113 Namespace: 0x0 00:25:05.113 Vendor Log Page: 0x0 00:25:05.113 00:25:05.113 Number of Queues 00:25:05.113 ================ 00:25:05.113 Number of I/O Submission Queues: 128 00:25:05.113 Number of I/O Completion Queues: 128 00:25:05.113 00:25:05.113 ZNS Specific Controller Data 00:25:05.113 ============================ 00:25:05.113 Zone Append Size Limit: 0 00:25:05.113 00:25:05.113 00:25:05.113 Active Namespaces 00:25:05.113 ================= 00:25:05.113 get_feature(0x05) failed 00:25:05.113 Namespace ID:1 00:25:05.113 Command Set Identifier: NVM (00h) 00:25:05.113 Deallocate: Supported 00:25:05.113 Deallocated/Unwritten Error: Not Supported 00:25:05.113 Deallocated Read Value: Unknown 00:25:05.113 Deallocate in Write Zeroes: Not Supported 00:25:05.113 Deallocated Guard Field: 0xFFFF 00:25:05.113 Flush: Supported 00:25:05.113 Reservation: Not Supported 00:25:05.113 Namespace Sharing Capabilities: Multiple Controllers 00:25:05.113 Size (in LBAs): 3907029168 (1863GiB) 00:25:05.113 Capacity (in LBAs): 3907029168 (1863GiB) 00:25:05.113 Utilization (in LBAs): 3907029168 (1863GiB) 00:25:05.113 UUID: 107b25aa-3324-4425-9c65-801b0a80627a 00:25:05.113 Thin Provisioning: Not Supported 00:25:05.113 Per-NS Atomic Units: Yes 00:25:05.113 Atomic Boundary Size (Normal): 0 00:25:05.113 Atomic Boundary Size (PFail): 0 00:25:05.113 Atomic Boundary Offset: 0 00:25:05.113 NGUID/EUI64 Never Reused: No 00:25:05.113 ANA group ID: 1 00:25:05.113 Namespace Write Protected: No 00:25:05.113 Number of LBA Formats: 1 00:25:05.113 Current LBA Format: LBA Format #00 00:25:05.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:05.113 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:05.113 rmmod nvme_rdma 00:25:05.113 rmmod nvme_fabrics 00:25:05.113 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:05.373 18:17:05 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:09.637 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:09.637 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:11.541 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:25:11.541 00:25:11.541 real 0m20.195s 00:25:11.541 user 0m5.439s 00:25:11.541 sys 0m12.065s 00:25:11.541 18:17:11 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.541 18:17:11 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.541 ************************************ 00:25:11.541 END TEST nvmf_identify_kernel_target 00:25:11.541 ************************************ 00:25:11.800 18:17:11 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:11.800 18:17:11 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:11.800 18:17:11 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:11.800 18:17:11 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.800 18:17:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:11.800 ************************************ 00:25:11.800 
START TEST nvmf_auth_host 00:25:11.800 ************************************ 00:25:11.800 18:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:11.800 * Looking for test storage... 00:25:11.800 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:11.800 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.801 18:17:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:19.921 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:19.921 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.921 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.922 18:17:20 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:19.922 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:19.922 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:19.922 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:20.181 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:20.181 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:20.181 altname enp217s0f0np0 00:25:20.181 altname ens818f0np0 00:25:20.181 inet 192.168.100.8/24 scope global mlx_0_0 00:25:20.181 valid_lft forever preferred_lft forever 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:20.181 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:20.181 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:20.181 altname enp217s0f1np1 00:25:20.181 altname ens818f1np1 00:25:20.181 inet 192.168.100.9/24 scope global mlx_0_1 00:25:20.181 valid_lft forever preferred_lft forever 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.181 
18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:20.181 192.168.100.9' 00:25:20.181 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:20.181 192.168.100.9' 00:25:20.181 18:17:20 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:20.182 192.168.100.9' 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1777984 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1777984 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1777984 ']' 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
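At this point the trace has resolved the two RDMA-capable mlx5 ports to 192.168.100.8 and 192.168.100.9, loaded nvme-rdma, and launched the SPDK target with authentication tracing enabled (nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, PID 1777984); waitforlisten then blocks until the target's RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and using spdk_get_version as the readiness probe (the real waitforlisten in autotest_common.sh is more elaborate):

    # Start the NVMe-oF target with the nvme_auth debug/trace flag enabled.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # Poll the RPC socket until the target accepts commands.
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s "$rpc_sock" spdk_get_version &>/dev/null && break
        sleep 0.1
    done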
00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.182 18:17:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=adb09c9c007f9894696a67d8e89381f4 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xA2 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key adb09c9c007f9894696a67d8e89381f4 0 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 adb09c9c007f9894696a67d8e89381f4 0 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=adb09c9c007f9894696a67d8e89381f4 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xA2 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xA2 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.xA2 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=31557c315c94daa2670c53effa1e56762fc26d28b18f8acfac5ef8eff8939169 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mtq 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 31557c315c94daa2670c53effa1e56762fc26d28b18f8acfac5ef8eff8939169 3 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 31557c315c94daa2670c53effa1e56762fc26d28b18f8acfac5ef8eff8939169 3 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=31557c315c94daa2670c53effa1e56762fc26d28b18f8acfac5ef8eff8939169 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mtq 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mtq 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.mtq 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.120 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=45105b867823e34dba4c80c51d99d601edbc100c79c86c73 00:25:21.379 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oCo 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 45105b867823e34dba4c80c51d99d601edbc100c79c86c73 0 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 45105b867823e34dba4c80c51d99d601edbc100c79c86c73 0 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=45105b867823e34dba4c80c51d99d601edbc100c79c86c73 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@704 -- # digest=0 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oCo 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oCo 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oCo 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0c1fe9474490521fb4ff6960bce69f7363afcc956081f23d 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.L1I 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0c1fe9474490521fb4ff6960bce69f7363afcc956081f23d 2 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0c1fe9474490521fb4ff6960bce69f7363afcc956081f23d 2 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0c1fe9474490521fb4ff6960bce69f7363afcc956081f23d 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.L1I 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.L1I 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.L1I 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e8ab39a169e4e42209294ac516f077d 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # 
file=/tmp/spdk.key-sha256.Q22 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e8ab39a169e4e42209294ac516f077d 1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e8ab39a169e4e42209294ac516f077d 1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e8ab39a169e4e42209294ac516f077d 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Q22 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Q22 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Q22 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=029c7c6c9067c29a58c9cc2939f5f819 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pLS 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 029c7c6c9067c29a58c9cc2939f5f819 1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 029c7c6c9067c29a58c9cc2939f5f819 1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=029c7c6c9067c29a58c9cc2939f5f819 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pLS 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pLS 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.pLS 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 
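Each gen_dhchap_key call traced above (null 32, sha512 64, null 48, sha384 48, sha256 32, ...) follows the same recipe: read len/2 random bytes from /dev/urandom with xxd, hand the hex string to format_dhchap_key, which wraps it as a DHHC-1 secret for the requested digest via an inline "python -" helper whose body the xtrace does not capture, write the result to a mktemp'd /tmp/spdk.key-* file, and chmod it 0600. A minimal sketch of that flow, with the DHHC-1 serialization reduced to a placeholder echo because the python step is not visible in the log:

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2                 # e.g. "null" 32, "sha512" 64
        # Digest ids exactly as declared in the traced digests=() array.
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file

        # len hex characters == len/2 random bytes, matching the traced xxd call.
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        file=$(mktemp -t "spdk.key-${digest}.XXX")

        # The real format_dhchap_key serializes the key with an inline python
        # helper (not shown in the trace); this echo only stands in for it.
        echo "DHHC-1:0${ids[$digest]}:${key}:" > "$file"

        chmod 0600 "$file"
        echo "$file"
    }

    # keys[0]=$(gen_dhchap_key_sketch null 32); ckeys[0]=$(gen_dhchap_key_sketch sha512 64)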
00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c3a65b68d946db2a41b6ba55c6550fddcf3dd7104439fc09 00:25:21.380 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0jO 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c3a65b68d946db2a41b6ba55c6550fddcf3dd7104439fc09 2 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c3a65b68d946db2a41b6ba55c6550fddcf3dd7104439fc09 2 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c3a65b68d946db2a41b6ba55c6550fddcf3dd7104439fc09 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0jO 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0jO 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.0jO 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ab9c1fbf29c8076678518b2b4b40ba8 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xV3 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ab9c1fbf29c8076678518b2b4b40ba8 0 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ab9c1fbf29c8076678518b2b4b40ba8 0 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ab9c1fbf29c8076678518b2b4b40ba8 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xV3 00:25:21.638 18:17:21 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xV3 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xV3 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=75b7540772c9eeb068cb9711bf9b83b5572d34b0c4cd06108f6cc6d87e8e41ef 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NTx 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 75b7540772c9eeb068cb9711bf9b83b5572d34b0c4cd06108f6cc6d87e8e41ef 3 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 75b7540772c9eeb068cb9711bf9b83b5572d34b0c4cd06108f6cc6d87e8e41ef 3 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=75b7540772c9eeb068cb9711bf9b83b5572d34b0c4cd06108f6cc6d87e8e41ef 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NTx 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NTx 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.NTx 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1777984 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1777984 ']' 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
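(With all five key/ckey files generated, host/auth.sh waits for the target process (pid 1777984) to listen on /var/tmp/spdk.sock and then registers each file with the SPDK keyring over RPC; the keyring_file_add_key calls in the trace below are that loop. A condensed sketch of the same loop, calling scripts/rpc.py directly instead of the rpc_cmd wrapper:)

    # Register each generated secret with the running SPDK target's keyring.
    # keys[i]/ckeys[i] hold the file paths produced by gen_dhchap_key.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    for i in "${!keys[@]}"; do
        $RPC keyring_file_add_key "key$i" "${keys[i]}"
        # A controller (bidirectional) key is optional; ckeys[4] is empty in this run.
        if [[ -n ${ckeys[i]} ]]; then
            $RPC keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done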
00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.638 18:17:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xA2 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.mtq ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mtq 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oCo 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.L1I ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L1I 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Q22 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.pLS ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pLS 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.0jO 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xV3 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xV3 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.NTx 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:21.896 18:17:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:26.084 Waiting for block devices as requested 00:25:26.084 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:26.084 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:26.343 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:26.343 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:26.343 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:26.601 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:26.601 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:26.601 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:26.859 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:27.426 No valid GPT data, bailing 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:27.426 
18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:27.426 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:25:27.685 00:25:27.685 Discovery Log Number of Records 2, Generation counter 2 00:25:27.685 =====Discovery Log Entry 0====== 00:25:27.685 trtype: rdma 00:25:27.685 adrfam: ipv4 00:25:27.685 subtype: current discovery subsystem 00:25:27.685 treq: not specified, sq flow control disable supported 00:25:27.685 portid: 1 00:25:27.685 trsvcid: 4420 00:25:27.685 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:27.685 traddr: 192.168.100.8 00:25:27.685 eflags: none 00:25:27.685 rdma_prtype: not specified 00:25:27.685 rdma_qptype: connected 00:25:27.685 rdma_cms: rdma-cm 00:25:27.685 rdma_pkey: 0x0000 00:25:27.685 =====Discovery Log Entry 1====== 00:25:27.685 trtype: rdma 00:25:27.685 adrfam: ipv4 00:25:27.685 subtype: nvme subsystem 00:25:27.685 treq: not specified, sq flow control disable supported 00:25:27.685 portid: 1 00:25:27.685 trsvcid: 4420 00:25:27.685 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:27.685 traddr: 192.168.100.8 00:25:27.685 eflags: none 00:25:27.685 rdma_prtype: not specified 00:25:27.685 rdma_qptype: connected 00:25:27.685 rdma_cms: rdma-cm 00:25:27.685 rdma_pkey: 0x0000 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:27.685 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.686 18:17:27 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.944 nvme0n1 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:27.944 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.945 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.204 nvme0n1 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.204 18:17:28 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.204 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.463 nvme0n1 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.463 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.722 18:17:28 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.722 18:17:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.722 nvme0n1 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.722 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:28.981 18:17:29 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 nvme0n1 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.981 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:29.240 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.241 nvme0n1 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.241 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.530 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.789 nvme0n1 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
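(Each connect_authenticate pass traced here follows the same pattern: restrict the initiator's allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach to the kernel target at 192.168.100.8:4420 over RDMA using the key, and optional controller key, for the current keyid, confirm the controller came up via bdev_nvme_get_controllers, then detach. A sketch of one iteration with values taken from the trace; the outer loop over digests, dhgroups and keyids is per host/auth.sh:)

    # One connect_authenticate pass (sha256 / ffdhe3072 / keyid 1), as traced above.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    $RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The pass succeeds only if the controller actually shows up under the expected name.
    [[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    $RPC bdev_nvme_detach_controller nvme0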
00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.789 18:17:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:29.789 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.790 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.790 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.049 nvme0n1 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.049 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.308 nvme0n1 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.308 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:30.309 
18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.309 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.568 nvme0n1 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.568 18:17:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.826 nvme0n1 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.826 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:30.827 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.085 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:31.086 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.344 nvme0n1 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.344 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.345 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.603 nvme0n1 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.603 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.604 18:17:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.862 nvme0n1 00:25:31.862 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.862 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:31.862 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.862 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.121 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.380 nvme0n1 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.380 18:17:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.639 nvme0n1 00:25:32.639 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.639 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.639 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.639 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.639 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.898 
18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.898 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.157 nvme0n1 00:25:33.157 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.157 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.157 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.157 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.157 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.157 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.416 18:17:33 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.675 nvme0n1 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
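The iterations traced above all follow the same host-side pattern. A minimal sketch of that sequence is shown below, assuming SPDK's scripts/rpc.py is the client behind the rpc_cmd wrapper and reusing the endpoint, NQNs, and placeholder key material from the trace (key1/ckey1 are key names registered earlier in the test, outside this excerpt):

    # 1. Pin the host to one digest/DH-group combination, e.g. sha256 + ffdhe4096.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # 2. Attach the controller over RDMA, authenticating with key1 and the
    #    bidirectional controller key ckey1 (omitted when no ckey is configured).
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Verify the controller came up, then tear it down for the next iteration.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Because each pass restricts bdev_nvme to a single digest/DH-group pair before connecting, a successful attach demonstrates that that exact DH-HMAC-CHAP combination negotiated end to end against the target.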
00:25:33.675 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:33.934 
18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.934 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.193 nvme0n1 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.193 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.452 18:17:34 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.710 nvme0n1 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.710 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:34.969 18:17:35 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.969 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:35.227 nvme0n1 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:35.227 18:17:35 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.227 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.486 18:17:35 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.051 nvme0n1 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:36.051 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.052 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.617 nvme0n1 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.617 18:17:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.876 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.442 nvme0n1 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.442 18:17:37 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.442 18:17:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.009 nvme0n1 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:38.009 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:38.267 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:38.267 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:38.267 18:17:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:38.267 18:17:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.267 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.267 18:17:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.836 nvme0n1 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:38.836 
18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.836 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 nvme0n1 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.110 
18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:39.110 
18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.110 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 nvme0n1 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.370 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.629 nvme0n1 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.629 18:17:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.887 nvme0n1 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.887 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.888 18:17:40 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.888 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.146 nvme0n1 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.146 18:17:40 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.146 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.405 nvme0n1 
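The trace repeats the same cycle once per digest / DH-group / key-id combination: pin the initiator's DH-HMAC-CHAP policy with bdev_nvme_set_options, attach over RDMA with that iteration's key pair, confirm a single nvme0 controller appears, then detach it. As a rough standalone sketch of one such iteration (here sha384 with ffdhe3072 and key index 0), assuming SPDK's scripts/rpc.py is reachable as rpc.py (the harness reaches the same RPCs through its rpc_cmd wrapper) and that the secrets named key0/ckey0 were already registered on both host and target earlier in auth.sh, it amounts to:

    # Hypothetical replay of one iteration from the trace; values mirror the log, not a separate setup.
    TARGET_IP=192.168.100.8                      # NVMF_FIRST_TARGET_IP in the trace
    HOSTNQN=nqn.2024-02.io.spdk:host0
    SUBNQN=nqn.2024-02.io.spdk:cnode0

    # Restrict the initiator to a single digest / DH-group combination for this pass.
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Attach over RDMA, authenticating with key0 and requiring the controller to
    # prove possession of ckey0 (bidirectional DH-HMAC-CHAP).
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$TARGET_IP" -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The test then checks that exactly this controller shows up and tears it down.
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0
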
00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.405 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.664 nvme0n1 00:25:40.664 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.664 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.664 18:17:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.664 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.664 18:17:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.664 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.923 nvme0n1 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.923 
18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.923 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.183 
18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.183 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.443 nvme0n1 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:41.443 18:17:41 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.443 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.702 nvme0n1 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.702 18:17:41 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.702 18:17:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.965 nvme0n1 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.965 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.296 nvme0n1 00:25:42.296 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.296 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.296 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.296 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.296 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.296 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.555 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.556 18:17:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.815 nvme0n1 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.815 
18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.815 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.073 nvme0n1 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.073 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.331 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.590 nvme0n1 00:25:43.590 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
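In the keyid=4 passes above, ckey is empty, so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 contributes no extra arguments and the attach is issued with --dhchap-key key4 only, i.e. only one-way authentication (host proving itself to the controller) is tested for that entry. A sketch of that conditional argument construction, assuming keys/ckeys are the arrays auth.sh populates before this excerpt:

    # expands to nothing when ckeys[keyid] is unset or empty, so the controller-key
    # argument is simply dropped from the attach command (unidirectional auth)
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"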
00:25:43.590 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.590 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.590 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.590 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.590 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.591 18:17:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.160 nvme0n1 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.160 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.418 nvme0n1 00:25:44.418 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.418 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.418 18:17:44 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.418 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.418 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.418 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.676 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.677 18:17:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.936 nvme0n1 00:25:44.936 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.936 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.936 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.936 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.936 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.936 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.195 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.454 nvme0n1 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.454 18:17:45 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.454 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.713 18:17:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.972 nvme0n1 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:45.972 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.230 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 nvme0n1 00:25:46.799 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.799 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.799 18:17:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.799 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.799 18:17:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:46.799 18:17:47 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.799 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.366 nvme0n1 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.366 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.367 
18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:47.367 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:47.624 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:47.624 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:47.624 18:17:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:47.624 18:17:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.624 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.624 18:17:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.192 nvme0n1 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.192 
18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.192 18:17:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.759 nvme0n1 00:25:48.759 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.759 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.759 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.759 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.759 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.759 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:48.760 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:49.018 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.018 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.018 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.584 nvme0n1 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.584 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 
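Aside (not part of the captured trace): the key= / ckey= values that appear throughout this run follow the DH-HMAC-CHAP secret representation. As far as I can tell from nvme-cli's key handling, the field after "DHHC-1:" is a hash indicator (00 = unhashed secret, 01/02/03 = SHA-256/384/512-transformed) and the base64 payload carries the secret followed by a 4-byte CRC-32; treat both points as an assumption, not something this log states. A minimal bash sketch for inspecting one of the keys shown above, using only coreutils:

    # Key copied verbatim from the trace above (keyid 0, hash indicator 00).
    key='DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn:'
    payload=${key#DHHC-1:??:}    # drop the "DHHC-1:<hash>:" prefix
    payload=${payload%:}         # drop the trailing ':'
    # Decoded length = secret length + (assumed) 4-byte CRC-32; prints 36 here,
    # which would correspond to a 32-byte secret.
    echo -n "$payload" | base64 -d | wc -c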
00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.585 18:17:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 nvme0n1 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.844 18:17:50 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:49.844 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:49.845 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.845 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.845 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.103 nvme0n1 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
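Aside (not part of the captured trace): every iteration recorded here follows the same host/auth.sh sequence: program the target-side key with nvmet_auth_set_key, restrict the host to one digest/DH group with bdev_nvme_set_options, attach over RDMA with the matching key pair, check that nvme0 shows up, then detach. A condensed sketch of that loop, reusing the commands, address, and NQNs exactly as they appear in the log; rpc_cmd, nvmet_auth_set_key and the keys/ckeys arrays are the test framework's own helpers, while the explicit keyid list and the ckey_arg variable are illustrative, not copied from the script:

    digest=sha512 dhgroup=ffdhe2048
    for keyid in 0 1 2 3 4; do
        # Target side: install the DH-CHAP key (and controller key, if any) for this keyid.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        # Host side: negotiate only this digest and DH group.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Pass --dhchap-ctrlr-key only when a controller key is defined for this keyid,
        # mirroring the ckey=(${ckeys[keyid]:+...}) expansion visible in the trace.
        ckey_arg=()
        [[ -n ${ckeys[keyid]:-} ]] && ckey_arg=(--dhchap-ctrlr-key "ckey$keyid")
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey_arg[@]}"
        # Authentication succeeded if the controller is present; tear down before the next key.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done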
00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.104 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.363 nvme0n1 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.363 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.364 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.650 nvme0n1 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.650 18:17:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 nvme0n1 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:17:51 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.910 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.168 nvme0n1 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.169 18:17:51 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.169 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.427 nvme0n1 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.427 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.685 18:17:51 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.685 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.686 18:17:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.686 nvme0n1 00:25:51.686 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.686 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.686 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.686 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.686 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.686 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.945 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.204 nvme0n1 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:52.204 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:52.205 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.205 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.205 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.463 nvme0n1 00:25:52.463 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.464 18:17:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 nvme0n1 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
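The connect_authenticate pass traced above repeats the same short RPC sequence for every digest/dhgroup/key combination: restrict the host to a single digest and DH group, attach the controller over RDMA with the matching keyring keys, check that a controller named nvme0 appears, then detach it again. A condensed sketch of one such iteration is shown below; it assumes rpc_cmd is the usual autotest wrapper around scripts/rpc.py on the default /var/tmp/spdk.sock socket, and only the RPC names and arguments are taken from the trace.

  # one connect_authenticate iteration (sketch; rpc.py path and socket are assumptions)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expects "nvme0"
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0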
00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.723 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.983 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.243 nvme0n1 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.243 
18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.243 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.503 nvme0n1 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 
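On the target side, each nvmet_auth_set_key call traced above pushes the chosen digest, DH group and DH-HMAC-CHAP key pair to the kernel nvmet host entry before the next connect attempt. The trace only shows the values being echoed, not where they are written; a minimal sketch of what such a helper typically does, assuming the standard nvmet configfs layout and the host NQN used by this test, is:

  # hypothetical nvmet_auth_set_key body (configfs paths are an assumption, not from the trace)
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"
  echo ffdhe4096      > "$host_dir/dhchap_dhgroup"
  echo "$key"         > "$host_dir/dhchap_key"       # DHHC-1:02:... (key 3 in the trace)
  echo "$ckey"        > "$host_dir/dhchap_ctrl_key"  # DHHC-1:00:... (ckey 3 in the trace)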
00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.503 18:17:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.071 nvme0n1 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.071 18:17:54 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.071 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.331 nvme0n1 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.331 18:17:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.963 nvme0n1 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:54.963 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.964 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 nvme0n1 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.225 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:55.484 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.485 18:17:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.744 nvme0n1 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.744 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.003 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.262 nvme0n1 00:25:56.262 18:17:56 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.262 18:17:56 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.262 18:17:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.828 nvme0n1 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWRiMDljOWMwMDdmOTg5NDY5NmE2N2Q4ZTg5MzgxZjTlQSwn: 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzE1NTdjMzE1Yzk0ZGFhMjY3MGM1M2VmZmExZTU2NzYyZmMyNmQyOGIxOGY4YWNmYWM1ZWY4ZWZmODkzOTE2OV7/Dok=: 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.828 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.395 nvme0n1 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.395 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:57.653 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.654 18:17:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.221 nvme0n1 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWU4YWIzOWExNjllNGU0MjIwOTI5NGFjNTE2ZjA3N2Qp5bO8: 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDI5YzdjNmM5MDY3YzI5YTU4YzljYzI5MzlmNWY4MTkHi+Tf: 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.221 18:17:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.788 nvme0n1 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzNhNjViNjhkOTQ2ZGIyYTQxYjZiYTU1YzY1NTBmZGRjZjNkZDcxMDQ0MzlmYzA5HoRQOQ==: 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: ]] 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmFiOWMxZmJmMjljODA3NjY3ODUxOGIyYjRiNDBiYThzCgrm: 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.788 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:59.046 18:17:59 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.046 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.612 nvme0n1 00:25:59.612 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.612 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.612 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.612 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.612 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.612 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzViNzU0MDc3MmM5ZWViMDY4Y2I5NzExYmY5YjgzYjU1NzJkMzRiMGM0Y2QwNjEwOGY2Y2M2ZDg3ZThlNDFlZnGG3QE=: 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:59.613 18:17:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.179 nvme0n1 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.179 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDUxMDViODY3ODIzZTM0ZGJhNGM4MGM1MWQ5OWQ2MDFlZGJjMTAwYzc5Yzg2YzczAq+9Qg==: 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: ]] 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGMxZmU5NDc0NDkwNTIxZmI0ZmY2OTYwYmNlNjlmNzM2M2FmY2M5NTYwODFmMjNkgOpxVg==: 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.180 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
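The entries that follow are the negative-authentication checks of host/auth.sh: after reconfiguring the host for sha256/ffdhe2048, the test re-runs bdev_nvme_attach_controller through the NOT wrapper while omitting or mismatching the DH-HMAC-CHAP keys (first no key, then key2 alone, then key1 paired with ckey2), so each JSON-RPC request recorded below is expected to fail with code -5 ("Input/output error"). A minimal standalone sketch of the no-key case is given here; it simply replays the rpc_cmd arguments visible in this trace through scripts/rpc.py, and the RPC_PY path plus the assumption that the target from this run is still listening are hypothetical, not part of the test script.

#!/usr/bin/env bash
# Sketch only: replay the expected-failure attach recorded in this trace.
# RPC_PY and a still-running target are assumptions; the address, NQNs and
# port are copied verbatim from the rpc_cmd call in the log.
RPC_PY=${RPC_PY:-/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py}

# No --dhchap-key is supplied while the target requires authentication, so
# the call should come back with JSON-RPC error -5 (Input/output error),
# exactly as captured in the request/response pairs below.
if "$RPC_PY" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
    echo "unexpected success: unauthenticated attach should have been rejected" >&2
    exit 1
fi
echo "attach without DHCHAP credentials was rejected as expected"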
00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.440 request: 00:26:00.440 { 00:26:00.440 "name": "nvme0", 00:26:00.440 "trtype": "rdma", 00:26:00.440 "traddr": "192.168.100.8", 00:26:00.440 "adrfam": "ipv4", 00:26:00.440 "trsvcid": "4420", 00:26:00.440 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:00.440 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:00.440 "prchk_reftag": false, 00:26:00.440 "prchk_guard": false, 00:26:00.440 "hdgst": false, 00:26:00.440 "ddgst": false, 00:26:00.440 "method": "bdev_nvme_attach_controller", 00:26:00.440 "req_id": 1 00:26:00.440 } 00:26:00.440 Got JSON-RPC error response 00:26:00.440 response: 00:26:00.440 { 00:26:00.440 "code": -5, 00:26:00.440 "message": "Input/output error" 00:26:00.440 } 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.440 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.440 request: 00:26:00.440 { 00:26:00.440 "name": "nvme0", 00:26:00.440 "trtype": "rdma", 00:26:00.440 "traddr": "192.168.100.8", 00:26:00.440 "adrfam": "ipv4", 00:26:00.440 "trsvcid": "4420", 00:26:00.440 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:00.440 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:00.440 "prchk_reftag": false, 00:26:00.440 "prchk_guard": false, 00:26:00.440 "hdgst": false, 00:26:00.440 "ddgst": false, 00:26:00.440 "dhchap_key": "key2", 00:26:00.440 "method": "bdev_nvme_attach_controller", 00:26:00.440 "req_id": 1 00:26:00.440 } 00:26:00.440 Got JSON-RPC error response 00:26:00.699 response: 00:26:00.699 { 00:26:00.699 "code": -5, 00:26:00.699 "message": "Input/output error" 00:26:00.699 } 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.699 18:18:00 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.699 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.699 request: 00:26:00.699 { 00:26:00.699 "name": "nvme0", 00:26:00.699 "trtype": "rdma", 00:26:00.699 "traddr": "192.168.100.8", 00:26:00.699 "adrfam": "ipv4", 00:26:00.699 "trsvcid": "4420", 00:26:00.699 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:00.699 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:00.700 "prchk_reftag": false, 00:26:00.700 "prchk_guard": false, 00:26:00.700 "hdgst": false, 00:26:00.700 "ddgst": false, 00:26:00.700 "dhchap_key": "key1", 00:26:00.700 "dhchap_ctrlr_key": "ckey2", 00:26:00.700 "method": "bdev_nvme_attach_controller", 00:26:00.700 "req_id": 1 00:26:00.700 } 00:26:00.700 Got JSON-RPC error response 00:26:00.700 response: 00:26:00.700 { 00:26:00.700 "code": -5, 00:26:00.700 "message": "Input/output error" 00:26:00.700 } 00:26:00.700 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:00.700 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:00.700 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:00.700 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:00.700 18:18:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:00.700 18:18:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 
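From here the test shuts everything down: the SIGINT/SIGTERM/EXIT trap is cleared and cleanup() runs nvmftestfini (sync, modprobe -v -r nvme-rdma and nvme-fabrics, killprocess of the nvmf target pid 1777984), deletes the generated /tmp/spdk.key-* files, and unwinds the kernel nvmet target it had configured under configfs before rebinding devices with setup.sh. A condensed sketch of that configfs teardown, assembled from the commands visible in the trailing entries, is shown here; the namespace "enable" path is inferred (xtrace does not print the redirect target of the echo 0), and the script assumes it is run as root on the same host while the kernel target is still configured.

#!/usr/bin/env bash
# Sketch of the combined teardown (auth.sh cleanup + clean_kernel_target)
# reconstructed from this trace; the enable-attribute path is an inference,
# everything else mirrors the logged commands.
set -e
CFS=/sys/kernel/config/nvmet
SUBSYS=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0

# Drop the host's allowed_hosts link and its host entry first.
rm -f "$CFS/subsystems/$SUBSYS/allowed_hosts/$HOSTNQN"
rmdir "$CFS/hosts/$HOSTNQN"

# Disable the namespace, detach the subsystem from port 1, then remove
# namespace, port and subsystem; configfs only allows rmdir once all
# children and symlinks underneath are gone.
echo 0 > "$CFS/subsystems/$SUBSYS/namespaces/1/enable"   # inferred redirect target
rm -f "$CFS/ports/1/subsystems/$SUBSYS"
rmdir "$CFS/subsystems/$SUBSYS/namespaces/1"
rmdir "$CFS/ports/1"
rmdir "$CFS/subsystems/$SUBSYS"

# Unload the kernel target modules, matching the trace's modprobe -r call.
modprobe -r nvmet_rdma nvmet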
00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:00.700 rmmod nvme_rdma 00:26:00.700 rmmod nvme_fabrics 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1777984 ']' 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1777984 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1777984 ']' 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1777984 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777984 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777984' 00:26:00.700 killing process with pid 1777984 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1777984 00:26:00.700 18:18:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1777984 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:00.959 18:18:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:05.149 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:05.149 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:07.063 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:07.063 18:18:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xA2 /tmp/spdk.key-null.oCo /tmp/spdk.key-sha256.Q22 /tmp/spdk.key-sha384.0jO /tmp/spdk.key-sha512.NTx /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:26:07.063 18:18:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:10.354 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:10.354 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:10.614 00:26:10.614 real 0m58.798s 00:26:10.614 user 0m50.064s 00:26:10.614 sys 0m17.093s 00:26:10.614 18:18:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:10.614 18:18:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.614 ************************************ 00:26:10.614 END TEST nvmf_auth_host 
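As a recap of the cleanup traced above: the test drops the host authorization entries, tears down the configfs-based kernel target in dependency order, unloads the nvmet modules, and then reruns setup.sh to rebind devices. A condensed sketch of that sequence follows; the destination of the bare 'echo 0' is not visible in the trace (redirects are not xtraced), so the enable attribute below is an assumption:

cfg=/sys/kernel/config/nvmet
nqn=nqn.2024-02.io.spdk:cnode0
rm    "$cfg/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0"   # drop host authorization
rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$cfg/subsystems/$nqn/enable"        # assumed target of the traced 'echo 0'
rm -f "$cfg/ports/1/subsystems/$nqn"          # unlink the port from the subsystem first
rmdir "$cfg/subsystems/$nqn/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$cfg/subsystems/$nqn"
modprobe -r nvmet_rdma nvmet                  # unload the kernel target modules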
00:26:10.614 ************************************ 00:26:10.614 18:18:10 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:26:10.614 18:18:10 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:26:10.614 18:18:10 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:10.614 18:18:10 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:10.614 18:18:10 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:10.614 18:18:10 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:10.614 18:18:10 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:10.614 18:18:10 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.614 18:18:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:10.614 ************************************ 00:26:10.614 START TEST nvmf_bdevperf 00:26:10.614 ************************************ 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:10.614 * Looking for test storage... 00:26:10.614 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.614 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.615 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.615 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.615 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.615 18:18:10 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:10.615 18:18:11 
nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.615 18:18:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.874 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.874 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.874 18:18:11 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.874 18:18:11 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:19.070 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:19.070 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.070 18:18:18 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:19.070 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:19.070 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.070 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.071 18:18:18 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:19.071 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:19.071 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:19.071 altname enp217s0f0np0 00:26:19.071 altname ens818f0np0 00:26:19.071 inet 192.168.100.8/24 scope global mlx_0_0 00:26:19.071 valid_lft forever preferred_lft forever 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:19.071 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:19.071 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:19.071 altname enp217s0f1np1 00:26:19.071 altname ens818f1np1 00:26:19.071 inet 192.168.100.9/24 scope global mlx_0_1 00:26:19.071 valid_lft forever preferred_lft forever 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:19.071 192.168.100.9' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:19.071 192.168.100.9' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:26:19.071 18:18:18 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:19.071 192.168.100.9' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1794069 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1794069 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1794069 ']' 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.071 18:18:18 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:19.071 [2024-07-15 18:18:18.817955] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:26:19.071 [2024-07-15 18:18:18.818019] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.071 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.071 [2024-07-15 18:18:18.900817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:19.071 [2024-07-15 18:18:18.974880] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.071 [2024-07-15 18:18:18.974914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:19.071 [2024-07-15 18:18:18.974924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.071 [2024-07-15 18:18:18.974933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.071 [2024-07-15 18:18:18.974940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.072 [2024-07-15 18:18:18.974986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.072 [2024-07-15 18:18:18.975006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.072 [2024-07-15 18:18:18.975007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.330 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.330 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:19.330 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:19.331 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:19.331 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.331 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.331 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:19.331 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.331 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.331 [2024-07-15 18:18:19.695650] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c91500/0x1c959f0) succeed. 00:26:19.331 [2024-07-15 18:18:19.704984] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c92aa0/0x1cd7080) succeed. 
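tgt_init begins here: nvmfappstart launched nvmf_tgt with core mask 0xE, waitforlisten blocked on /var/tmp/spdk.sock, and the nvmf_create_transport call produced the two create_ib_device notices above; the rpc_cmd calls that follow add the malloc bdev, subsystem, namespace, and RDMA listener. The same bring-up as stand-alone rpc.py invocations, a sketch assuming the default /var/tmp/spdk.sock RPC socket:

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev with 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420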
00:26:19.589 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.589 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:19.589 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.589 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.589 Malloc0 00:26:19.589 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.590 [2024-07-15 18:18:19.846511] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:19.590 { 00:26:19.590 "params": { 00:26:19.590 "name": "Nvme$subsystem", 00:26:19.590 "trtype": "$TEST_TRANSPORT", 00:26:19.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.590 "adrfam": "ipv4", 00:26:19.590 "trsvcid": "$NVMF_PORT", 00:26:19.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.590 "hdgst": ${hdgst:-false}, 00:26:19.590 "ddgst": ${ddgst:-false} 00:26:19.590 }, 00:26:19.590 "method": "bdev_nvme_attach_controller" 00:26:19.590 } 00:26:19.590 EOF 00:26:19.590 )") 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:19.590 18:18:19 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:19.590 "params": { 00:26:19.590 "name": "Nvme1", 00:26:19.590 "trtype": "rdma", 00:26:19.590 "traddr": "192.168.100.8", 00:26:19.590 "adrfam": "ipv4", 00:26:19.590 "trsvcid": "4420", 00:26:19.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:19.590 "hdgst": false, 00:26:19.590 "ddgst": false 00:26:19.590 }, 00:26:19.590 "method": "bdev_nvme_attach_controller" 00:26:19.590 }' 00:26:19.590 [2024-07-15 18:18:19.898866] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:26:19.590 [2024-07-15 18:18:19.898914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794217 ] 00:26:19.590 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.590 [2024-07-15 18:18:19.981584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.848 [2024-07-15 18:18:20.066037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.848 Running I/O for 1 seconds... 00:26:21.223 00:26:21.223 Latency(us) 00:26:21.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:21.223 Verification LBA range: start 0x0 length 0x4000 00:26:21.223 Nvme1n1 : 1.00 18485.39 72.21 0.00 0.00 6886.82 2503.48 11744.05 00:26:21.223 =================================================================================================================== 00:26:21.223 Total : 18485.39 72.21 0.00 0.00 6886.82 2503.48 11744.05 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1794490 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.223 { 00:26:21.223 "params": { 00:26:21.223 "name": "Nvme$subsystem", 00:26:21.223 "trtype": "$TEST_TRANSPORT", 00:26:21.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.223 "adrfam": "ipv4", 00:26:21.223 "trsvcid": "$NVMF_PORT", 00:26:21.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.223 "hdgst": ${hdgst:-false}, 00:26:21.223 "ddgst": ${ddgst:-false} 00:26:21.223 }, 00:26:21.223 "method": "bdev_nvme_attach_controller" 00:26:21.223 } 00:26:21.223 EOF 00:26:21.223 )") 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
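The bdevperf invocations above feed the generated controller description in over an anonymous descriptor (--json /dev/fd/62, then /dev/fd/63). A sketch of the equivalent with an ordinary file: the outer wrapper is the shape gen_nvmf_target_json is understood to build around the printed params block (only the inner object is visible in the trace, so the wrapper is an assumption), the /tmp path is an arbitrary choice, and the target configured earlier is assumed to still be listening on 192.168.100.8:4420.

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Queue depth 128, 4096-byte I/O, verify workload, 15-second run.
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15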
00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:21.223 18:18:21 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:21.223 "params": { 00:26:21.223 "name": "Nvme1", 00:26:21.223 "trtype": "rdma", 00:26:21.223 "traddr": "192.168.100.8", 00:26:21.223 "adrfam": "ipv4", 00:26:21.223 "trsvcid": "4420", 00:26:21.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.223 "hdgst": false, 00:26:21.223 "ddgst": false 00:26:21.223 }, 00:26:21.223 "method": "bdev_nvme_attach_controller" 00:26:21.223 }' 00:26:21.223 [2024-07-15 18:18:21.507157] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:26:21.223 [2024-07-15 18:18:21.507209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794490 ] 00:26:21.223 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.223 [2024-07-15 18:18:21.591981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.481 [2024-07-15 18:18:21.657026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.481 Running I/O for 15 seconds... 00:26:24.765 18:18:24 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1794069 00:26:24.765 18:18:24 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:25.335 [2024-07-15 18:18:25.491738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.491990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.491999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 
18:18:25.492079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.335 [2024-07-15 18:18:25.492226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.335 [2024-07-15 18:18:25.492235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127768 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.336 [2024-07-15 18:18:25.492981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.336 [2024-07-15 18:18:25.492991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 
00:26:25.337 [2024-07-15 18:18:25.493010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:119 nsid:1 lba:127128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127200 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007538000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x182b00 00:26:25.337 
[2024-07-15 18:18:25.493758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.337 [2024-07-15 18:18:25.493769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x182b00 00:26:25.337 [2024-07-15 18:18:25.493778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.493987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.493997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.494256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182b00 00:26:25.338 [2024-07-15 18:18:25.494265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6f81e000 sqhd:52b0 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.496307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:25.338 [2024-07-15 18:18:25.496346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:25.338 [2024-07-15 18:18:25.496375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127480 len:8 PRP1 0x0 PRP2 0x0 00:26:25.338 [2024-07-15 18:18:25.496409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.338 [2024-07-15 18:18:25.496473] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
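The long run of "ABORTED - SQ DELETION" completions above is the host failing back every queued WRITE/READ once the RDMA queue pair to the target goes away: bdev_nvme frees the disconnected qpair, resets the controller, and then retries the RDMA connect (the RDMA_CM_EVENT_REJECTED / "connect error -74" entries that follow) until the target answers again. A minimal way to trigger this kind of mid-I/O target restart from a shell is sketched below, for illustration only; it is not the actual bdevperf.sh/tgt_init code, and $nvmfpid plus waitforlisten are assumed to come from the surrounding test framework:

  # kill the running nvmf target while host I/O is still in flight,
  # then bring it back up with the same flags the restarted target uses below
  kill -9 "$nvmfpid"
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # test-framework helper, assumed available
  # the target must then be reconfigured over RPC before the host can reconnect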
00:26:25.338 [2024-07-15 18:18:25.499081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:25.338 [2024-07-15 18:18:25.512752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:25.338 [2024-07-15 18:18:25.515223] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:25.338 [2024-07-15 18:18:25.515244] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:25.338 [2024-07-15 18:18:25.515252] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:26.274 [2024-07-15 18:18:26.519005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:26.274 [2024-07-15 18:18:26.519032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:26.274 [2024-07-15 18:18:26.519204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:26.274 [2024-07-15 18:18:26.519217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:26.274 [2024-07-15 18:18:26.519228] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:26.274 [2024-07-15 18:18:26.521900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:26.274 [2024-07-15 18:18:26.527708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:26.274 [2024-07-15 18:18:26.530147] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:26.274 [2024-07-15 18:18:26.530168] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:26.274 [2024-07-15 18:18:26.530177] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:27.212 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1794069 Killed "${NVMF_APP[@]}" "$@" 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1795550 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1795550 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1795550 ']' 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.212 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:27.213 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.213 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.213 18:18:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.213 [2024-07-15 18:18:27.527976] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:26:27.213 [2024-07-15 18:18:27.528033] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.213 [2024-07-15 18:18:27.534037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:27.213 [2024-07-15 18:18:27.534060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:27.213 [2024-07-15 18:18:27.534232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:27.213 [2024-07-15 18:18:27.534244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:27.213 [2024-07-15 18:18:27.534256] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:27.213 [2024-07-15 18:18:27.536913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:27.213 [2024-07-15 18:18:27.540289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:27.213 [2024-07-15 18:18:27.542701] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:27.213 [2024-07-15 18:18:27.542721] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:27.213 [2024-07-15 18:18:27.542730] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:27.213 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.498 [2024-07-15 18:18:27.612538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.498 [2024-07-15 18:18:27.686054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.498 [2024-07-15 18:18:27.686092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.498 [2024-07-15 18:18:27.686106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.498 [2024-07-15 18:18:27.686114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.498 [2024-07-15 18:18:27.686122] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
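For reference, the -m 0xE mask passed to nvmf_tgt above is a hexadecimal core mask: 0xE is binary 1110, i.e. cores 1, 2 and 3, which is why the app reports three available cores and starts three reactors just below. A quick, purely illustrative way to decode such a mask (any mask value can be substituted):

  mask=0xE
  for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "core $core"
  done
  # prints core 1, core 2 and core 3, one per line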
00:26:27.498 [2024-07-15 18:18:27.686169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.498 [2024-07-15 18:18:27.686258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.498 [2024-07-15 18:18:27.686260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.065 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.065 [2024-07-15 18:18:28.400718] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x95f500/0x9639f0) succeed. 00:26:28.065 [2024-07-15 18:18:28.410032] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x960aa0/0x9a5080) succeed. 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.324 Malloc0 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.324 [2024-07-15 18:18:28.546635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:28.324 [2024-07-15 18:18:28.546670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:28.324 [2024-07-15 18:18:28.546845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:28.324 [2024-07-15 18:18:28.546857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:28.324 [2024-07-15 18:18:28.546867] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:28.324 [2024-07-15 18:18:28.546886] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:28.324 [2024-07-15 18:18:28.549554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:28.324 [2024-07-15 18:18:28.556135] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:28.324 [2024-07-15 18:18:28.559792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.324 18:18:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1794490 00:26:28.324 [2024-07-15 18:18:28.599005] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
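The rpc_cmd calls above are the entire target-side configuration for this run: an RDMA transport, a 64 MB malloc bdev (512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and an RDMA listener on 192.168.100.8:4420. Outside the autotest wrappers the same setup can be driven with scripts/rpc.py; a minimal sketch, assuming a running nvmf_tgt on the default RPC socket /var/tmp/spdk.sock:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # a host can then connect with, e.g.:
  #   nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420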
00:26:38.317 00:26:38.317 Latency(us) 00:26:38.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.317 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:38.317 Verification LBA range: start 0x0 length 0x4000 00:26:38.317 Nvme1n1 : 15.00 13380.14 52.27 10681.90 0.00 5300.75 345.70 1033476.51 00:26:38.317 =================================================================================================================== 00:26:38.317 Total : 13380.14 52.27 10681.90 0.00 5300.75 345.70 1033476.51 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.317 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:38.318 rmmod nvme_rdma 00:26:38.318 rmmod nvme_fabrics 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1795550 ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1795550 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1795550 ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1795550 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1795550 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1795550' 00:26:38.318 killing process with pid 1795550 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1795550 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1795550 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:38.318 00:26:38.318 real 0m26.569s 00:26:38.318 user 1m4.659s 00:26:38.318 sys 0m7.179s 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:38.318 18:18:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:38.318 ************************************ 00:26:38.318 END TEST nvmf_bdevperf 00:26:38.318 ************************************ 00:26:38.318 18:18:37 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:26:38.318 18:18:37 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:26:38.318 18:18:37 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:38.318 18:18:37 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.318 18:18:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:38.318 ************************************ 00:26:38.318 START TEST nvmf_target_disconnect 00:26:38.318 ************************************ 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:26:38.318 * Looking for test storage... 00:26:38.318 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 
-- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:38.318 18:18:37 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:46.466 18:18:45 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:46.466 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:46.466 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:46.466 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:46.466 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:46.466 18:18:45 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:46.466 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:46.467 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:46.467 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:46.467 altname enp217s0f0np0 00:26:46.467 altname ens818f0np0 00:26:46.467 inet 192.168.100.8/24 scope global mlx_0_0 00:26:46.467 valid_lft forever preferred_lft forever 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:46.467 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:46.467 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:46.467 altname enp217s0f1np1 00:26:46.467 altname ens818f1np1 00:26:46.467 inet 192.168.100.9/24 scope global mlx_0_1 00:26:46.467 valid_lft forever preferred_lft forever 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
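The trace above derives each RDMA interface's IPv4 address by piping "ip -o -4 addr show" through awk and cut. A minimal standalone sketch of that extraction follows; the interface name mlx_0_0 and the 192.168.100.x addressing are taken from this run and would differ on another node:

  # Print the IPv4 address assigned to an interface, stripped of its /prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this test node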
00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:46.467 192.168.100.9' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:46.467 192.168.100.9' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:46.467 192.168.100.9' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:46.467 18:18:45 
nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:46.467 ************************************ 00:26:46.467 START TEST nvmf_target_disconnect_tc1 00:26:46.467 ************************************ 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:26:46.467 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:26:46.468 18:18:45 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:46.468 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.468 [2024-07-15 18:18:45.804746] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:26:46.468 [2024-07-15 18:18:45.804852] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:26:46.468 [2024-07-15 18:18:45.804883] 
nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:26:46.468 [2024-07-15 18:18:46.809076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:46.468 [2024-07-15 18:18:46.809139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:46.468 [2024-07-15 18:18:46.809174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:26:46.468 [2024-07-15 18:18:46.809238] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:46.468 [2024-07-15 18:18:46.809268] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:46.468 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:26:46.468 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:46.468 Initializing NVMe Controllers 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:46.468 00:26:46.468 real 0m1.146s 00:26:46.468 user 0m0.856s 00:26:46.468 sys 0m0.279s 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:46.468 ************************************ 00:26:46.468 END TEST nvmf_target_disconnect_tc1 00:26:46.468 ************************************ 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.468 18:18:46 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:46.727 ************************************ 00:26:46.727 START TEST nvmf_target_disconnect_tc2 00:26:46.727 ************************************ 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:46.727 18:18:46 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1801356 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1801356 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1801356 ']' 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:46.727 18:18:46 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.727 [2024-07-15 18:18:46.934443] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:26:46.727 [2024-07-15 18:18:46.934488] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.727 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.727 [2024-07-15 18:18:47.030837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.727 [2024-07-15 18:18:47.103574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.727 [2024-07-15 18:18:47.103617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.727 [2024-07-15 18:18:47.103627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.727 [2024-07-15 18:18:47.103635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.727 [2024-07-15 18:18:47.103642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
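nvmfappstart here launches the target as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 and then blocks in waitforlisten until the application answers on its RPC socket. A rough standalone equivalent of that start-and-wait step is sketched below; the poll loop and the spdk_get_version probe are an assumption, not the helper's exact implementation:

  # Start the target (instance 0, tracepoint mask 0xFFFF, cores 4-7 via -m 0xF0),
  # then wait until its JSON-RPC socket at /var/tmp/spdk.sock responds.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done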
00:26:46.727 [2024-07-15 18:18:47.103761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:46.727 [2024-07-15 18:18:47.103870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:46.727 [2024-07-15 18:18:47.103980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:46.727 [2024-07-15 18:18:47.103981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 Malloc0 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 [2024-07-15 18:18:47.833092] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x239ffc0/0x23abb40) succeed. 00:26:47.664 [2024-07-15 18:18:47.842576] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23a1600/0x23ed1d0) succeed. 
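With the target up, the script configures it over rpc_cmd: a 64 MiB malloc bdev, an RDMA transport, and, in the trace that follows, a subsystem exposing that bdev on 192.168.100.8:4420. Collected into one sketch, using scripts/rpc.py in place of the harness's rpc_cmd wrapper (an assumption about invocation only; the RPC names and arguments are the ones visible in this trace):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420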
00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 [2024-07-15 18:18:47.979872] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1801641 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:47.664 18:18:47 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:47.664 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.197 18:18:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1801356 00:26:50.197 18:18:49 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:51.134 
Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Write completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.134 Read completed with error (sct=0, sc=8) 00:26:51.134 starting I/O failed 00:26:51.135 Write completed with error (sct=0, sc=8) 00:26:51.135 starting I/O failed 00:26:51.135 Write completed with error (sct=0, sc=8) 00:26:51.135 starting I/O failed 00:26:51.135 Write completed with error (sct=0, sc=8) 00:26:51.135 starting I/O failed 00:26:51.135 [2024-07-15 18:18:51.194285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:51.703 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1801356 Killed "${NVMF_APP[@]}" "$@" 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:51.703 18:18:52 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1802189 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1802189 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1802189 ']' 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:51.703 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:51.703 [2024-07-15 18:18:52.058147] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 
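The trace at this point is the replacement target starting up: tc2 launched the reconnect example against the first nvmf_tgt, hard-killed that target with SIGKILL while I/O was outstanding (the burst of "completed with error" lines above), and disconnect_init is now bringing a fresh target back so the initiator can be watched retrying. Reduced to its skeleton, with paths, queue depth, and run length taken from the trace and disconnect_init being the test's own helper:

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"              # hard-kill the target while I/O is in flight
  disconnect_init 192.168.100.8   # restart nvmf_tgt and re-create the subsystem
  wait "$reconnectpid"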
00:26:51.703 [2024-07-15 18:18:52.058198] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.703 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.962 [2024-07-15 18:18:52.160273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Read completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 Write completed with error (sct=0, sc=8) 00:26:51.962 starting I/O failed 00:26:51.962 [2024-07-15 18:18:52.199674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.962 [2024-07-15 18:18:52.232599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:51.962 [2024-07-15 18:18:52.232636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.962 [2024-07-15 18:18:52.232645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.962 [2024-07-15 18:18:52.232654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.962 [2024-07-15 18:18:52.232661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.962 [2024-07-15 18:18:52.232796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:51.962 [2024-07-15 18:18:52.232904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:51.962 [2024-07-15 18:18:52.233074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:51.962 [2024-07-15 18:18:52.233074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.528 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.785 Malloc0 00:26:52.785 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.785 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:26:52.785 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.785 18:18:52 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.785 [2024-07-15 18:18:52.961021] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22c9fc0/0x22d5b40) succeed. 00:26:52.785 [2024-07-15 18:18:52.970586] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22cb600/0x23171d0) succeed. 
00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.785 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.785 [2024-07-15 18:18:53.108625] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:52.786 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.786 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:26:52.786 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.786 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.786 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.786 18:18:53 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1801641 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O 
failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Read completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 Write completed with error (sct=0, sc=8) 00:26:53.044 starting I/O failed 00:26:53.044 [2024-07-15 18:18:53.204710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 [2024-07-15 18:18:53.216757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.216810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.216830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.216841] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.044 [2024-07-15 18:18:53.216851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.044 [2024-07-15 18:18:53.226949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 qpair failed and we were unable to recover it. 
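While the replacement target finishes coming up, each reconnect attempt is rejected on the I/O qpair (Unknown controller ID 0x1 on the target side, fabrics CONNECT completing with sct 1, sc 130 on the host side) and the qpair is torn down; the same block repeats below for each retry window. When sifting a capture like this one, a quick tally of how many attempts failed can be taken with grep (the log file name here is illustrative):

  grep -c 'qpair failed and we were unable to recover it' reconnect.log
  grep -c 'completed with error (sct=0, sc=8)' reconnect.log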
00:26:53.044 [2024-07-15 18:18:53.236616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.236660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.236678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.236688] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.044 [2024-07-15 18:18:53.236698] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.044 [2024-07-15 18:18:53.247056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 qpair failed and we were unable to recover it. 00:26:53.044 [2024-07-15 18:18:53.256751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.256788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.256806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.256816] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.044 [2024-07-15 18:18:53.256825] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.044 [2024-07-15 18:18:53.267075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 qpair failed and we were unable to recover it. 00:26:53.044 [2024-07-15 18:18:53.276815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.276866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.276885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.276895] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.044 [2024-07-15 18:18:53.276905] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.044 [2024-07-15 18:18:53.287153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 qpair failed and we were unable to recover it. 
00:26:53.044 [2024-07-15 18:18:53.296831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.296875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.296892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.296901] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.044 [2024-07-15 18:18:53.296910] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.044 [2024-07-15 18:18:53.307134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 qpair failed and we were unable to recover it. 00:26:53.044 [2024-07-15 18:18:53.316888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.316924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.316941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.316951] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.044 [2024-07-15 18:18:53.316960] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.044 [2024-07-15 18:18:53.327229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.044 qpair failed and we were unable to recover it. 00:26:53.044 [2024-07-15 18:18:53.336942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.044 [2024-07-15 18:18:53.336982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.044 [2024-07-15 18:18:53.336999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.044 [2024-07-15 18:18:53.337009] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.045 [2024-07-15 18:18:53.337023] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.045 [2024-07-15 18:18:53.347242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.045 qpair failed and we were unable to recover it. 
00:26:53.045 [2024-07-15 18:18:53.356992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.045 [2024-07-15 18:18:53.357035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.045 [2024-07-15 18:18:53.357053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.045 [2024-07-15 18:18:53.357062] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.045 [2024-07-15 18:18:53.357071] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.045 [2024-07-15 18:18:53.367165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.045 qpair failed and we were unable to recover it. 00:26:53.045 [2024-07-15 18:18:53.377146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.045 [2024-07-15 18:18:53.377191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.045 [2024-07-15 18:18:53.377208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.045 [2024-07-15 18:18:53.377217] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.045 [2024-07-15 18:18:53.377227] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.045 [2024-07-15 18:18:53.387553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.045 qpair failed and we were unable to recover it. 00:26:53.045 [2024-07-15 18:18:53.397338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.045 [2024-07-15 18:18:53.397376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.045 [2024-07-15 18:18:53.397392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.045 [2024-07-15 18:18:53.397402] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.045 [2024-07-15 18:18:53.397411] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.045 [2024-07-15 18:18:53.407721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.045 qpair failed and we were unable to recover it. 
00:26:53.045 [2024-07-15 18:18:53.417232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.045 [2024-07-15 18:18:53.417269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.045 [2024-07-15 18:18:53.417285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.045 [2024-07-15 18:18:53.417294] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.045 [2024-07-15 18:18:53.417303] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.045 [2024-07-15 18:18:53.427728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.045 qpair failed and we were unable to recover it. 00:26:53.045 [2024-07-15 18:18:53.437272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.045 [2024-07-15 18:18:53.437314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.045 [2024-07-15 18:18:53.437330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.045 [2024-07-15 18:18:53.437340] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.045 [2024-07-15 18:18:53.437349] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.447946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-15 18:18:53.457462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.457509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.304 [2024-07-15 18:18:53.457527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.304 [2024-07-15 18:18:53.457537] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.304 [2024-07-15 18:18:53.457546] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.467876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 
00:26:53.304 [2024-07-15 18:18:53.477599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.477638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.304 [2024-07-15 18:18:53.477655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.304 [2024-07-15 18:18:53.477664] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.304 [2024-07-15 18:18:53.477673] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.487887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-15 18:18:53.497578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.497616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.304 [2024-07-15 18:18:53.497633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.304 [2024-07-15 18:18:53.497642] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.304 [2024-07-15 18:18:53.497651] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.508102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-15 18:18:53.517681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.517720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.304 [2024-07-15 18:18:53.517737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.304 [2024-07-15 18:18:53.517747] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.304 [2024-07-15 18:18:53.517756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.528189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 
00:26:53.304 [2024-07-15 18:18:53.537664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.537702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.304 [2024-07-15 18:18:53.537718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.304 [2024-07-15 18:18:53.537728] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.304 [2024-07-15 18:18:53.537740] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.548222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-15 18:18:53.557738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.557776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.304 [2024-07-15 18:18:53.557793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.304 [2024-07-15 18:18:53.557803] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.304 [2024-07-15 18:18:53.557812] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.304 [2024-07-15 18:18:53.568128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-15 18:18:53.577759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.304 [2024-07-15 18:18:53.577800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.577817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.577827] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.577836] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.305 [2024-07-15 18:18:53.588338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-15 18:18:53.597791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.305 [2024-07-15 18:18:53.597828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.597844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.597854] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.597862] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.305 [2024-07-15 18:18:53.608342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-15 18:18:53.617850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.305 [2024-07-15 18:18:53.617887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.617903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.617913] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.617921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.305 [2024-07-15 18:18:53.628279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-15 18:18:53.637855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.305 [2024-07-15 18:18:53.637894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.637911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.637921] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.637929] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.305 [2024-07-15 18:18:53.648449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-15 18:18:53.657946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.305 [2024-07-15 18:18:53.657983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.658001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.658017] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.658026] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.305 [2024-07-15 18:18:53.668395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-15 18:18:53.678037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.305 [2024-07-15 18:18:53.678075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.678091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.678101] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.678109] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.305 [2024-07-15 18:18:53.688735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-15 18:18:53.698103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.305 [2024-07-15 18:18:53.698153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.305 [2024-07-15 18:18:53.698169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.305 [2024-07-15 18:18:53.698179] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.305 [2024-07-15 18:18:53.698188] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.564 [2024-07-15 18:18:53.708577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.564 qpair failed and we were unable to recover it. 
00:26:53.564 [2024-07-15 18:18:53.718213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.564 [2024-07-15 18:18:53.718252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.564 [2024-07-15 18:18:53.718273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.564 [2024-07-15 18:18:53.718282] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.564 [2024-07-15 18:18:53.718291] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.564 [2024-07-15 18:18:53.728615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.564 qpair failed and we were unable to recover it. 00:26:53.564 [2024-07-15 18:18:53.738192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.738233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.738250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.738259] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.738269] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.748781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.758281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.758319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.758337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.758347] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.758355] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.768858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 
00:26:53.565 [2024-07-15 18:18:53.778389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.778435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.778452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.778463] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.778474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.788885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.798426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.798460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.798477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.798486] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.798495] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.808687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.818470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.818507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.818523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.818533] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.818541] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.828882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 
00:26:53.565 [2024-07-15 18:18:53.838531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.838571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.838587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.838596] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.838605] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.848969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.858583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.858625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.858642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.858652] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.858661] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.869063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.878578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.878616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.878632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.878642] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.878651] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.889047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 
00:26:53.565 [2024-07-15 18:18:53.898792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.898833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.898850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.898860] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.898869] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.908827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.918658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.918697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.918713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.918723] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.918732] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.929095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 00:26:53.565 [2024-07-15 18:18:53.938736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.938782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.938798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.938808] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.938817] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.565 [2024-07-15 18:18:53.949418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.565 qpair failed and we were unable to recover it. 
00:26:53.565 [2024-07-15 18:18:53.958783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.565 [2024-07-15 18:18:53.958822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.565 [2024-07-15 18:18:53.958840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.565 [2024-07-15 18:18:53.958850] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.565 [2024-07-15 18:18:53.958859] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.824 [2024-07-15 18:18:53.969137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-07-15 18:18:53.978950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.824 [2024-07-15 18:18:53.978989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.824 [2024-07-15 18:18:53.979006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.824 [2024-07-15 18:18:53.979024] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.824 [2024-07-15 18:18:53.979036] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.824 [2024-07-15 18:18:53.989217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-07-15 18:18:53.998989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.824 [2024-07-15 18:18:53.999033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.824 [2024-07-15 18:18:53.999049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.824 [2024-07-15 18:18:53.999059] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.824 [2024-07-15 18:18:53.999068] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.824 [2024-07-15 18:18:54.009183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.824 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-07-15 18:18:54.019061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.824 [2024-07-15 18:18:54.019103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.824 [2024-07-15 18:18:54.019119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.824 [2024-07-15 18:18:54.019129] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.824 [2024-07-15 18:18:54.019138] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.824 [2024-07-15 18:18:54.029164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-07-15 18:18:54.038925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.824 [2024-07-15 18:18:54.038965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.824 [2024-07-15 18:18:54.038982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.824 [2024-07-15 18:18:54.038992] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.824 [2024-07-15 18:18:54.039001] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.824 [2024-07-15 18:18:54.049633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.824 qpair failed and we were unable to recover it. 00:26:53.824 [2024-07-15 18:18:54.059177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.824 [2024-07-15 18:18:54.059217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.824 [2024-07-15 18:18:54.059234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.824 [2024-07-15 18:18:54.059244] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.824 [2024-07-15 18:18:54.059253] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.824 [2024-07-15 18:18:54.069584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.824 qpair failed and we were unable to recover it. 
00:26:53.824 [2024-07-15 18:18:54.079257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.824 [2024-07-15 18:18:54.079298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.824 [2024-07-15 18:18:54.079314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.079324] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.079333] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.089585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 00:26:53.825 [2024-07-15 18:18:54.099342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.099386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.099402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.099412] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.099421] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.109506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 00:26:53.825 [2024-07-15 18:18:54.119393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.119430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.119446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.119456] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.119465] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.129755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 
00:26:53.825 [2024-07-15 18:18:54.139411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.139450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.139466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.139476] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.139485] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.149856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 00:26:53.825 [2024-07-15 18:18:54.159403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.159442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.159462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.159472] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.159481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.169638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 00:26:53.825 [2024-07-15 18:18:54.179474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.179518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.179534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.179543] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.179553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.189900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 
00:26:53.825 [2024-07-15 18:18:54.199527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.199564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.199580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.199590] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.199599] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:53.825 [2024-07-15 18:18:54.209722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.825 qpair failed and we were unable to recover it. 00:26:53.825 [2024-07-15 18:18:54.219591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.825 [2024-07-15 18:18:54.219630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.825 [2024-07-15 18:18:54.219646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.825 [2024-07-15 18:18:54.219657] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.825 [2024-07-15 18:18:54.219666] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.082 [2024-07-15 18:18:54.229842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.239633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.239672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.239690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.239699] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.239708] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.249868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 
00:26:54.083 [2024-07-15 18:18:54.259796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.259840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.259858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.259868] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.259877] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.270133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.279862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.279899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.279915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.279925] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.279934] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.290146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.299801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.299842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.299858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.299868] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.299876] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.310241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 
00:26:54.083 [2024-07-15 18:18:54.319789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.319827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.319843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.319853] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.319862] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.330300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.339863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.339906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.339923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.339932] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.339941] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.350375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.359899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.359941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.359957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.359967] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.359976] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.370603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 
00:26:54.083 [2024-07-15 18:18:54.380044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.380082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.380099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.380108] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.380117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.390614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.400180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.400218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.400234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.400244] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.400253] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.410526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.420174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.420214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.420230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.420243] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.420252] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.430755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 
00:26:54.083 [2024-07-15 18:18:54.440221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.440264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.440281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.440290] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.440300] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.450765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.460235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.460277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.460294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.460304] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.460313] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.083 [2024-07-15 18:18:54.470601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.083 qpair failed and we were unable to recover it. 00:26:54.083 [2024-07-15 18:18:54.480204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.083 [2024-07-15 18:18:54.480242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.083 [2024-07-15 18:18:54.480258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.083 [2024-07-15 18:18:54.480268] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.083 [2024-07-15 18:18:54.480277] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.342 [2024-07-15 18:18:54.490617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.342 qpair failed and we were unable to recover it. 
00:26:54.342 [2024-07-15 18:18:54.500285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.342 [2024-07-15 18:18:54.500328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.342 [2024-07-15 18:18:54.500344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.342 [2024-07-15 18:18:54.500354] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.342 [2024-07-15 18:18:54.500363] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.342 [2024-07-15 18:18:54.510614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.342 qpair failed and we were unable to recover it. 00:26:54.342 [2024-07-15 18:18:54.520401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.342 [2024-07-15 18:18:54.520440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.342 [2024-07-15 18:18:54.520457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.342 [2024-07-15 18:18:54.520466] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.342 [2024-07-15 18:18:54.520475] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.342 [2024-07-15 18:18:54.530668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.342 qpair failed and we were unable to recover it. 00:26:54.342 [2024-07-15 18:18:54.540408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.342 [2024-07-15 18:18:54.540449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.342 [2024-07-15 18:18:54.540465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.342 [2024-07-15 18:18:54.540475] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.342 [2024-07-15 18:18:54.540484] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.342 [2024-07-15 18:18:54.550789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.342 qpair failed and we were unable to recover it. 
00:26:54.342 [2024-07-15 18:18:54.560508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.342 [2024-07-15 18:18:54.560546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.342 [2024-07-15 18:18:54.560562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.342 [2024-07-15 18:18:54.560572] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.342 [2024-07-15 18:18:54.560581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.342 [2024-07-15 18:18:54.570916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.342 qpair failed and we were unable to recover it. 00:26:54.343 [2024-07-15 18:18:54.580528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.580573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.580589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.580599] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.580608] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.590909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 00:26:54.343 [2024-07-15 18:18:54.600638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.600672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.600691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.600701] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.600710] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.610803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 
00:26:54.343 [2024-07-15 18:18:54.620738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.620771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.620788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.620797] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.620806] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.630986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 00:26:54.343 [2024-07-15 18:18:54.640642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.640682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.640699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.640708] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.640717] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.651089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 00:26:54.343 [2024-07-15 18:18:54.660800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.660839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.660856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.660865] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.660874] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.671163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 
00:26:54.343 [2024-07-15 18:18:54.680811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.680846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.680863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.680872] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.680881] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.691228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 00:26:54.343 [2024-07-15 18:18:54.700893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.700932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.700949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.700959] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.700968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.711251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 00:26:54.343 [2024-07-15 18:18:54.720936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.720974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.720990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.721000] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.721009] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.343 [2024-07-15 18:18:54.731112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.343 qpair failed and we were unable to recover it. 
00:26:54.343 [2024-07-15 18:18:54.741009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.343 [2024-07-15 18:18:54.741059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.343 [2024-07-15 18:18:54.741075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.343 [2024-07-15 18:18:54.741085] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.343 [2024-07-15 18:18:54.741094] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.602 [2024-07-15 18:18:54.751183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.602 qpair failed and we were unable to recover it. 00:26:54.602 [2024-07-15 18:18:54.761058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.602 [2024-07-15 18:18:54.761093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.602 [2024-07-15 18:18:54.761110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.602 [2024-07-15 18:18:54.761120] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.602 [2024-07-15 18:18:54.761129] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.602 [2024-07-15 18:18:54.771330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.602 qpair failed and we were unable to recover it. 00:26:54.602 [2024-07-15 18:18:54.781073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.781110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.781126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.781135] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.781144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.791532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 
00:26:54.603 [2024-07-15 18:18:54.801263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.801300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.801316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.801326] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.801335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.811521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:54.821262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.821300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.821316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.821326] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.821335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.831619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:54.841243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.841280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.841296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.841305] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.841314] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.851800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 
00:26:54.603 [2024-07-15 18:18:54.861403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.861441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.861457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.861470] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.861479] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.871807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:54.881381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.881418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.881434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.881444] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.881453] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.891702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:54.901544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.901586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.901602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.901612] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.901621] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.911876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 
00:26:54.603 [2024-07-15 18:18:54.921580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.921614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.921630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.921639] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.921648] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.931908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:54.941624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.941663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.941680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.941690] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.941699] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.952009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:54.961654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.961693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.961709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.961719] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.961728] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.972013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 
00:26:54.603 [2024-07-15 18:18:54.981608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:54.981646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:54.981662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:54.981672] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:54.981680] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.603 [2024-07-15 18:18:54.992056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.603 qpair failed and we were unable to recover it. 00:26:54.603 [2024-07-15 18:18:55.001735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.603 [2024-07-15 18:18:55.001770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.603 [2024-07-15 18:18:55.001786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.603 [2024-07-15 18:18:55.001796] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.603 [2024-07-15 18:18:55.001805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.012230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 00:26:54.862 [2024-07-15 18:18:55.021826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.021863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.021880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.021889] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.021898] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.032221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 
00:26:54.862 [2024-07-15 18:18:55.041945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.041987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.042007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.042023] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.042032] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.052368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 00:26:54.862 [2024-07-15 18:18:55.061933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.061977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.061994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.062004] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.062018] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.072392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 00:26:54.862 [2024-07-15 18:18:55.082055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.082091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.082107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.082117] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.082126] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.092350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 
00:26:54.862 [2024-07-15 18:18:55.102058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.102092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.102108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.102118] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.102127] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.112492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 00:26:54.862 [2024-07-15 18:18:55.122157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.122195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.122212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.122222] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.122231] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.132639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 00:26:54.862 [2024-07-15 18:18:55.142148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.142190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.142206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.142216] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.142225] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.152652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 
00:26:54.862 [2024-07-15 18:18:55.162336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.162374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.862 [2024-07-15 18:18:55.162391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.862 [2024-07-15 18:18:55.162401] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.862 [2024-07-15 18:18:55.162410] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.862 [2024-07-15 18:18:55.172657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.862 qpair failed and we were unable to recover it. 00:26:54.862 [2024-07-15 18:18:55.182388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.862 [2024-07-15 18:18:55.182427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.863 [2024-07-15 18:18:55.182443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.863 [2024-07-15 18:18:55.182453] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.863 [2024-07-15 18:18:55.182461] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.863 [2024-07-15 18:18:55.192804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.863 qpair failed and we were unable to recover it. 00:26:54.863 [2024-07-15 18:18:55.202556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.863 [2024-07-15 18:18:55.202596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.863 [2024-07-15 18:18:55.202613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.863 [2024-07-15 18:18:55.202622] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.863 [2024-07-15 18:18:55.202631] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.863 [2024-07-15 18:18:55.212591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.863 qpair failed and we were unable to recover it. 
00:26:54.863 [2024-07-15 18:18:55.222573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.863 [2024-07-15 18:18:55.222620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.863 [2024-07-15 18:18:55.222637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.863 [2024-07-15 18:18:55.222647] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.863 [2024-07-15 18:18:55.222656] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.863 [2024-07-15 18:18:55.232894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.863 qpair failed and we were unable to recover it. 00:26:54.863 [2024-07-15 18:18:55.242472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:54.863 [2024-07-15 18:18:55.242505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:54.863 [2024-07-15 18:18:55.242522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:54.863 [2024-07-15 18:18:55.242531] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:54.863 [2024-07-15 18:18:55.242540] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:54.863 [2024-07-15 18:18:55.252849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:54.863 qpair failed and we were unable to recover it. 00:26:55.121 [2024-07-15 18:18:55.262501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.121 [2024-07-15 18:18:55.262538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.121 [2024-07-15 18:18:55.262555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.121 [2024-07-15 18:18:55.262565] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.121 [2024-07-15 18:18:55.262574] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.121 [2024-07-15 18:18:55.273042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.121 qpair failed and we were unable to recover it. 
00:26:55.121 [2024-07-15 18:18:55.282608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.121 [2024-07-15 18:18:55.282646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.121 [2024-07-15 18:18:55.282663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.121 [2024-07-15 18:18:55.282672] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.121 [2024-07-15 18:18:55.282681] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.121 [2024-07-15 18:18:55.292969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.121 qpair failed and we were unable to recover it. 00:26:55.121 [2024-07-15 18:18:55.302740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.121 [2024-07-15 18:18:55.302776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.121 [2024-07-15 18:18:55.302792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.121 [2024-07-15 18:18:55.302805] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.121 [2024-07-15 18:18:55.302813] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.121 [2024-07-15 18:18:55.313025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.121 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.322728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.322763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.322779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.322788] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.322797] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.333061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-07-15 18:18:55.342733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.342769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.342785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.342795] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.342804] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.353204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.362784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.362820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.362837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.362847] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.362855] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.373140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.382861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.382898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.382915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.382925] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.382933] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.393348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-07-15 18:18:55.402976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.403022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.403039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.403049] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.403058] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.413390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.423065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.423101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.423117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.423127] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.423136] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.433414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.443050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.443090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.443107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.443117] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.443126] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.453334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.122 [2024-07-15 18:18:55.463080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.463124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.463141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.463151] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.463160] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.473556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.483140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.483178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.483197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.483207] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.483216] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.493510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 00:26:55.122 [2024-07-15 18:18:55.503209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.122 [2024-07-15 18:18:55.503246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.122 [2024-07-15 18:18:55.503263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.122 [2024-07-15 18:18:55.503273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.122 [2024-07-15 18:18:55.503282] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.122 [2024-07-15 18:18:55.513735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.122 qpair failed and we were unable to recover it. 
00:26:55.381 [2024-07-15 18:18:55.523263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.381 [2024-07-15 18:18:55.523304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.381 [2024-07-15 18:18:55.523320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.381 [2024-07-15 18:18:55.523330] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.381 [2024-07-15 18:18:55.523339] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.381 [2024-07-15 18:18:55.533666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.381 qpair failed and we were unable to recover it. 00:26:55.381 [2024-07-15 18:18:55.543479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.381 [2024-07-15 18:18:55.543520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.381 [2024-07-15 18:18:55.543537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.381 [2024-07-15 18:18:55.543546] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.381 [2024-07-15 18:18:55.543555] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.381 [2024-07-15 18:18:55.553820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.381 qpair failed and we were unable to recover it. 00:26:55.381 [2024-07-15 18:18:55.563465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.381 [2024-07-15 18:18:55.563502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.381 [2024-07-15 18:18:55.563519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.381 [2024-07-15 18:18:55.563529] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.381 [2024-07-15 18:18:55.563538] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.381 [2024-07-15 18:18:55.573897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.381 qpair failed and we were unable to recover it. 
00:26:55.381 [2024-07-15 18:18:55.583414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.381 [2024-07-15 18:18:55.583456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.381 [2024-07-15 18:18:55.583473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.381 [2024-07-15 18:18:55.583482] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.381 [2024-07-15 18:18:55.583491] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.381 [2024-07-15 18:18:55.593819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.381 qpair failed and we were unable to recover it. 00:26:55.381 [2024-07-15 18:18:55.603562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.381 [2024-07-15 18:18:55.603602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.381 [2024-07-15 18:18:55.603618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.381 [2024-07-15 18:18:55.603628] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.603636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.614031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 00:26:55.382 [2024-07-15 18:18:55.623778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.623822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.623838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.623848] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.623857] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.634036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 
00:26:55.382 [2024-07-15 18:18:55.643775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.643810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.643826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.643836] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.643844] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.654224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 00:26:55.382 [2024-07-15 18:18:55.663802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.663848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.663865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.663875] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.663884] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.674314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 00:26:55.382 [2024-07-15 18:18:55.683808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.683847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.683863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.683873] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.683881] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.694318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 
00:26:55.382 [2024-07-15 18:18:55.703985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.704031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.704047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.704057] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.704066] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.714403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 00:26:55.382 [2024-07-15 18:18:55.724007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.724052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.724068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.724079] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.724088] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.734184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 00:26:55.382 [2024-07-15 18:18:55.744042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.744079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.744096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.744109] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.744117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.754390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 
00:26:55.382 [2024-07-15 18:18:55.764159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.382 [2024-07-15 18:18:55.764201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.382 [2024-07-15 18:18:55.764218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.382 [2024-07-15 18:18:55.764229] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.382 [2024-07-15 18:18:55.764238] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.382 [2024-07-15 18:18:55.774419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.382 qpair failed and we were unable to recover it. 00:26:55.641 [2024-07-15 18:18:55.784054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.641 [2024-07-15 18:18:55.784096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.641 [2024-07-15 18:18:55.784112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.641 [2024-07-15 18:18:55.784122] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.641 [2024-07-15 18:18:55.784131] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.641 [2024-07-15 18:18:55.794699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.641 qpair failed and we were unable to recover it. 00:26:55.641 [2024-07-15 18:18:55.804212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.641 [2024-07-15 18:18:55.804252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.641 [2024-07-15 18:18:55.804269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.641 [2024-07-15 18:18:55.804278] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.641 [2024-07-15 18:18:55.804287] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.641 [2024-07-15 18:18:55.814407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.641 qpair failed and we were unable to recover it. 
00:26:55.641 [2024-07-15 18:18:55.824450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.641 [2024-07-15 18:18:55.824483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.641 [2024-07-15 18:18:55.824499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.824509] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.824518] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.834859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:55.844462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.844502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.844518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.844528] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.844537] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.854706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:55.864484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.864522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.864539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.864549] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.864558] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.874692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-15 18:18:55.884628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.884671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.884687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.884697] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.884706] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.895021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:55.904591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.904628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.904644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.904654] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.904663] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.914793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:55.924562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.924600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.924620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.924629] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.924638] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.934983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-15 18:18:55.944696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.944735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.944751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.944761] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.944769] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.954965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:55.964801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.964836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.964852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.964862] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.964871] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.975048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:55.984801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:55.984837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:55.984853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:55.984862] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:55.984871] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:55.995070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-15 18:18:56.004800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:56.004839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:56.004855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:56.004865] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:56.004873] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:56.015183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-15 18:18:56.024816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.642 [2024-07-15 18:18:56.024860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.642 [2024-07-15 18:18:56.024876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.642 [2024-07-15 18:18:56.024885] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.642 [2024-07-15 18:18:56.024894] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.642 [2024-07-15 18:18:56.035394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.902 [2024-07-15 18:18:56.044874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.044920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.044936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.044946] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.044956] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.902 [2024-07-15 18:18:56.055481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.902 qpair failed and we were unable to recover it. 
00:26:55.902 [2024-07-15 18:18:56.064934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.064973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.064989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.064999] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.065008] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.902 [2024-07-15 18:18:56.075589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.902 qpair failed and we were unable to recover it. 00:26:55.902 [2024-07-15 18:18:56.085089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.085127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.085143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.085152] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.085161] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.902 [2024-07-15 18:18:56.095478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.902 qpair failed and we were unable to recover it. 00:26:55.902 [2024-07-15 18:18:56.105054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.105099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.105115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.105125] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.105134] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.902 [2024-07-15 18:18:56.115463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.902 qpair failed and we were unable to recover it. 
00:26:55.902 [2024-07-15 18:18:56.125101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.125144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.125160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.125170] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.125179] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.902 [2024-07-15 18:18:56.135661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.902 qpair failed and we were unable to recover it. 00:26:55.902 [2024-07-15 18:18:56.145197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.145231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.145247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.145257] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.145266] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.902 [2024-07-15 18:18:56.155715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.902 qpair failed and we were unable to recover it. 00:26:55.902 [2024-07-15 18:18:56.165373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.902 [2024-07-15 18:18:56.165411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.902 [2024-07-15 18:18:56.165427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.902 [2024-07-15 18:18:56.165437] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.902 [2024-07-15 18:18:56.165445] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.175768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 
00:26:55.903 [2024-07-15 18:18:56.185325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.903 [2024-07-15 18:18:56.185366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.903 [2024-07-15 18:18:56.185383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.903 [2024-07-15 18:18:56.185396] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.903 [2024-07-15 18:18:56.185405] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.195699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 00:26:55.903 [2024-07-15 18:18:56.205421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.903 [2024-07-15 18:18:56.205463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.903 [2024-07-15 18:18:56.205479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.903 [2024-07-15 18:18:56.205489] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.903 [2024-07-15 18:18:56.205498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.215900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 00:26:55.903 [2024-07-15 18:18:56.225356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.903 [2024-07-15 18:18:56.225394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.903 [2024-07-15 18:18:56.225410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.903 [2024-07-15 18:18:56.225420] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.903 [2024-07-15 18:18:56.225429] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.235942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 
00:26:55.903 [2024-07-15 18:18:56.245580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.903 [2024-07-15 18:18:56.245618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.903 [2024-07-15 18:18:56.245634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.903 [2024-07-15 18:18:56.245644] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.903 [2024-07-15 18:18:56.245653] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.255971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 00:26:55.903 [2024-07-15 18:18:56.265635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.903 [2024-07-15 18:18:56.265682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.903 [2024-07-15 18:18:56.265699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.903 [2024-07-15 18:18:56.265709] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.903 [2024-07-15 18:18:56.265718] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.275987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 00:26:55.903 [2024-07-15 18:18:56.285660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.903 [2024-07-15 18:18:56.285697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.903 [2024-07-15 18:18:56.285713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.903 [2024-07-15 18:18:56.285723] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.903 [2024-07-15 18:18:56.285732] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:55.903 [2024-07-15 18:18:56.296138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:55.903 qpair failed and we were unable to recover it. 
00:26:56.163 [2024-07-15 18:18:56.305741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.305780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.305796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.305805] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.305814] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.316184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.325751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.325793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.325808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.325818] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.325827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.336174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.345824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.345863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.345879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.345889] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.345898] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.356221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 
00:26:56.163 [2024-07-15 18:18:56.365855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.365891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.365913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.365923] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.365932] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.376315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.385818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.385858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.385874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.385884] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.385893] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.396574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.405978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.406020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.406036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.406046] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.406055] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.416473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 
00:26:56.163 [2024-07-15 18:18:56.426113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.426152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.426168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.426178] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.426187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.436510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.446205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.446243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.446260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.446270] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.446278] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.456494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.466237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.466271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.466287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.466297] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.466306] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.476661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 
00:26:56.163 [2024-07-15 18:18:56.486195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.486234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.486251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.486260] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.486269] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.496745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.506322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.506368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.506388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.506398] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.506408] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.516684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.163 [2024-07-15 18:18:56.526410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.526444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.526460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.526469] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.526478] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.536814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 
00:26:56.163 [2024-07-15 18:18:56.546576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.163 [2024-07-15 18:18:56.546615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.163 [2024-07-15 18:18:56.546631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.163 [2024-07-15 18:18:56.546640] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.163 [2024-07-15 18:18:56.546649] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.163 [2024-07-15 18:18:56.556825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.163 qpair failed and we were unable to recover it. 00:26:56.422 [2024-07-15 18:18:56.566529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.422 [2024-07-15 18:18:56.566569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.422 [2024-07-15 18:18:56.566585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.422 [2024-07-15 18:18:56.566595] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.422 [2024-07-15 18:18:56.566604] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.422 [2024-07-15 18:18:56.576922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.422 qpair failed and we were unable to recover it. 00:26:56.422 [2024-07-15 18:18:56.586477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.422 [2024-07-15 18:18:56.586518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.422 [2024-07-15 18:18:56.586534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.422 [2024-07-15 18:18:56.586544] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.422 [2024-07-15 18:18:56.586553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.422 [2024-07-15 18:18:56.596950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.422 qpair failed and we were unable to recover it. 
00:26:56.422 [2024-07-15 18:18:56.606714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.422 [2024-07-15 18:18:56.606756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.422 [2024-07-15 18:18:56.606773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.422 [2024-07-15 18:18:56.606783] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.422 [2024-07-15 18:18:56.606792] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.422 [2024-07-15 18:18:56.617118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.423 [2024-07-15 18:18:56.626667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.626706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.626722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.626735] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.626744] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.637083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.423 [2024-07-15 18:18:56.646748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.646787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.646803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.646813] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.646822] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.656967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 
00:26:56.423 [2024-07-15 18:18:56.666732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.666770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.666787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.666796] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.666805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.677238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.423 [2024-07-15 18:18:56.686826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.686861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.686877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.686887] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.686896] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.697397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.423 [2024-07-15 18:18:56.706820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.706858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.706875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.706884] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.706893] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.717343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 
00:26:56.423 [2024-07-15 18:18:56.726953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.726992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.727008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.727030] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.727040] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.737228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.423 [2024-07-15 18:18:56.747025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.747063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.747080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.747089] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.747098] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.757413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.423 [2024-07-15 18:18:56.767035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.767075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.767091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.767101] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.767109] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.777273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 
00:26:56.423 [2024-07-15 18:18:56.787009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.423 [2024-07-15 18:18:56.787049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.423 [2024-07-15 18:18:56.787066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.423 [2024-07-15 18:18:56.787076] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.423 [2024-07-15 18:18:56.787085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.423 [2024-07-15 18:18:56.797622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.423 qpair failed and we were unable to recover it. 00:26:56.424 [2024-07-15 18:18:56.807078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.424 [2024-07-15 18:18:56.807117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.424 [2024-07-15 18:18:56.807137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.424 [2024-07-15 18:18:56.807146] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.424 [2024-07-15 18:18:56.807155] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.424 [2024-07-15 18:18:56.817625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.424 qpair failed and we were unable to recover it. 00:26:56.684 [2024-07-15 18:18:56.827235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.827279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.827296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.827306] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.827314] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.837487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 
00:26:56.684 [2024-07-15 18:18:56.847245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.847282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.847299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.847309] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.847318] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.857604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 00:26:56.684 [2024-07-15 18:18:56.867352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.867390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.867406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.867416] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.867425] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.877713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 00:26:56.684 [2024-07-15 18:18:56.887352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.887392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.887408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.887417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.887426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.897809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 
00:26:56.684 [2024-07-15 18:18:56.907369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.907408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.907425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.907435] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.907443] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.917831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 00:26:56.684 [2024-07-15 18:18:56.927534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.927574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.927590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.927599] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.927608] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.937801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 00:26:56.684 [2024-07-15 18:18:56.947531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.947572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.947590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.947600] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.947610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.684 [2024-07-15 18:18:56.957926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.684 qpair failed and we were unable to recover it. 
00:26:56.684 [2024-07-15 18:18:56.967603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.684 [2024-07-15 18:18:56.967640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.684 [2024-07-15 18:18:56.967656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.684 [2024-07-15 18:18:56.967666] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.684 [2024-07-15 18:18:56.967675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.685 [2024-07-15 18:18:56.978197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.685 qpair failed and we were unable to recover it. 00:26:56.685 [2024-07-15 18:18:56.987634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.685 [2024-07-15 18:18:56.987677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.685 [2024-07-15 18:18:56.987694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.685 [2024-07-15 18:18:56.987703] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.685 [2024-07-15 18:18:56.987712] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.685 [2024-07-15 18:18:56.998141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.685 qpair failed and we were unable to recover it. 00:26:56.685 [2024-07-15 18:18:57.007712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.685 [2024-07-15 18:18:57.007751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.685 [2024-07-15 18:18:57.007768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.685 [2024-07-15 18:18:57.007778] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.685 [2024-07-15 18:18:57.007786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.685 [2024-07-15 18:18:57.018075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.685 qpair failed and we were unable to recover it. 
00:26:56.685 [2024-07-15 18:18:57.027812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.685 [2024-07-15 18:18:57.027848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.685 [2024-07-15 18:18:57.027864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.685 [2024-07-15 18:18:57.027874] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.685 [2024-07-15 18:18:57.027883] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.685 [2024-07-15 18:18:57.038109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.685 qpair failed and we were unable to recover it. 00:26:56.685 [2024-07-15 18:18:57.047813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.685 [2024-07-15 18:18:57.047853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.685 [2024-07-15 18:18:57.047869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.685 [2024-07-15 18:18:57.047878] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.685 [2024-07-15 18:18:57.047888] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.685 [2024-07-15 18:18:57.058322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.685 qpair failed and we were unable to recover it. 00:26:56.685 [2024-07-15 18:18:57.067838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.685 [2024-07-15 18:18:57.067883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.685 [2024-07-15 18:18:57.067899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.685 [2024-07-15 18:18:57.067911] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.685 [2024-07-15 18:18:57.067921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.685 [2024-07-15 18:18:57.078373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.685 qpair failed and we were unable to recover it. 
00:26:56.944 [2024-07-15 18:18:57.087916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.944 [2024-07-15 18:18:57.087954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.944 [2024-07-15 18:18:57.087971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.944 [2024-07-15 18:18:57.087981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.087990] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.098328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.108071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.108112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.108129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.108138] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.108147] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.118360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.128006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.128050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.128066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.128076] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.128084] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.138412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 
00:26:56.945 [2024-07-15 18:18:57.148201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.148238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.148254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.148264] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.148273] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.158488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.168186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.168227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.168244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.168254] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.168263] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.178668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.188212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.188251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.188267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.188277] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.188285] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.198686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 
00:26:56.945 [2024-07-15 18:18:57.208351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.208390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.208407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.208417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.208426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.218670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.228418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.228457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.228475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.228485] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.228494] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.238757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.248455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.248495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.248515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.248525] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.248534] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.258731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 
00:26:56.945 [2024-07-15 18:18:57.268425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.268462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.268479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.268488] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.268497] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.278885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.288690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.288728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.288744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.288754] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.288763] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.299024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:56.945 [2024-07-15 18:18:57.308663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.308701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.308718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.308728] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.308737] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.319036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 
00:26:56.945 [2024-07-15 18:18:57.328707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.945 [2024-07-15 18:18:57.328746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.945 [2024-07-15 18:18:57.328762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.945 [2024-07-15 18:18:57.328772] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.945 [2024-07-15 18:18:57.328781] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:56.945 [2024-07-15 18:18:57.339124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.945 qpair failed and we were unable to recover it. 00:26:57.205 [2024-07-15 18:18:57.348838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.205 [2024-07-15 18:18:57.348874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.205 [2024-07-15 18:18:57.348890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.205 [2024-07-15 18:18:57.348901] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.205 [2024-07-15 18:18:57.348910] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.205 [2024-07-15 18:18:57.359092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.205 qpair failed and we were unable to recover it. 00:26:57.205 [2024-07-15 18:18:57.368796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.205 [2024-07-15 18:18:57.368836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.205 [2024-07-15 18:18:57.368853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.205 [2024-07-15 18:18:57.368863] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.205 [2024-07-15 18:18:57.368872] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.205 [2024-07-15 18:18:57.379053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.205 qpair failed and we were unable to recover it. 
00:26:57.205 [2024-07-15 18:18:57.388891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.205 [2024-07-15 18:18:57.388929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.205 [2024-07-15 18:18:57.388946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.205 [2024-07-15 18:18:57.388955] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.205 [2024-07-15 18:18:57.388964] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.205 [2024-07-15 18:18:57.399249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.205 qpair failed and we were unable to recover it. 00:26:57.205 [2024-07-15 18:18:57.408980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.205 [2024-07-15 18:18:57.409025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.205 [2024-07-15 18:18:57.409043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.205 [2024-07-15 18:18:57.409052] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.205 [2024-07-15 18:18:57.409062] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.205 [2024-07-15 18:18:57.419157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.205 qpair failed and we were unable to recover it. 00:26:57.205 [2024-07-15 18:18:57.429133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.205 [2024-07-15 18:18:57.429177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.205 [2024-07-15 18:18:57.429193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.205 [2024-07-15 18:18:57.429203] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.205 [2024-07-15 18:18:57.429212] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.205 [2024-07-15 18:18:57.439221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.205 qpair failed and we were unable to recover it. 
00:26:57.205 [2024-07-15 18:18:57.448977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.205 [2024-07-15 18:18:57.449020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.205 [2024-07-15 18:18:57.449037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.205 [2024-07-15 18:18:57.449047] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.205 [2024-07-15 18:18:57.449055] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.205 [2024-07-15 18:18:57.459299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 00:26:57.206 [2024-07-15 18:18:57.469181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.469227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.469243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.469252] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.469261] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.479477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 00:26:57.206 [2024-07-15 18:18:57.489248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.489289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.489305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.489315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.489323] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.499526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 
00:26:57.206 [2024-07-15 18:18:57.509239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.509274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.509291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.509303] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.509312] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.519473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 00:26:57.206 [2024-07-15 18:18:57.529265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.529303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.529319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.529329] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.529337] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.539588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 00:26:57.206 [2024-07-15 18:18:57.549333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.549373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.549389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.549399] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.549408] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.559652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 
00:26:57.206 [2024-07-15 18:18:57.569520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.569555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.569571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.569581] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.569590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.579698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 00:26:57.206 [2024-07-15 18:18:57.589386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.206 [2024-07-15 18:18:57.589424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.206 [2024-07-15 18:18:57.589441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.206 [2024-07-15 18:18:57.589451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.206 [2024-07-15 18:18:57.589459] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.206 [2024-07-15 18:18:57.599745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.206 qpair failed and we were unable to recover it. 00:26:57.465 [2024-07-15 18:18:57.609540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.465 [2024-07-15 18:18:57.609578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.465 [2024-07-15 18:18:57.609595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.465 [2024-07-15 18:18:57.609605] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.465 [2024-07-15 18:18:57.609613] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.465 [2024-07-15 18:18:57.619851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.465 qpair failed and we were unable to recover it. 
00:26:57.466 [2024-07-15 18:18:57.629615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.629655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.629671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.629681] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.629690] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.639761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.649666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.649699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.649716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.649726] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.649734] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.659838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.669677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.669718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.669734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.669744] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.669753] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.679996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 
00:26:57.466 [2024-07-15 18:18:57.689776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.689813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.689834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.689843] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.689852] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.700137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.709761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.709807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.709824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.709833] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.709842] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.720044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.729872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.729911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.729928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.729938] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.729946] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.740084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 
00:26:57.466 [2024-07-15 18:18:57.749974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.750009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.750038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.750047] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.750056] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.760194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.770017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.770057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.770074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.770083] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.770092] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.780321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.789985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.790031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.790047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.790056] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.790065] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.800342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 
00:26:57.466 [2024-07-15 18:18:57.810009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.810055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.810072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.810082] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.810090] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.820494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.830154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.830191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.830208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.830217] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.830226] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.840440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 00:26:57.466 [2024-07-15 18:18:57.850304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.466 [2024-07-15 18:18:57.850344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.466 [2024-07-15 18:18:57.850360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.466 [2024-07-15 18:18:57.850370] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.466 [2024-07-15 18:18:57.850379] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.466 [2024-07-15 18:18:57.860493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.466 qpair failed and we were unable to recover it. 
00:26:57.727 [2024-07-15 18:18:57.870278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.727 [2024-07-15 18:18:57.870324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.727 [2024-07-15 18:18:57.870342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.727 [2024-07-15 18:18:57.870352] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.727 [2024-07-15 18:18:57.870361] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.727 [2024-07-15 18:18:57.880632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.727 qpair failed and we were unable to recover it. 00:26:57.727 [2024-07-15 18:18:57.890397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.727 [2024-07-15 18:18:57.890432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:57.890449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:57.890458] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:57.890467] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:57.900848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:57.910502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:57.910541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:57.910558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:57.910568] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:57.910577] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:57.920990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 
00:26:57.728 [2024-07-15 18:18:57.930542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:57.930580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:57.930596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:57.930606] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:57.930615] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:57.940878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:57.950525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:57.950565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:57.950581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:57.950594] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:57.950602] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:57.961045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:57.970598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:57.970636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:57.970652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:57.970662] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:57.970671] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:57.981148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 
00:26:57.728 [2024-07-15 18:18:57.990734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:57.990770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:57.990785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:57.990795] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:57.990804] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.001183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:58.010723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:58.010761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:58.010777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:58.010787] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:58.010796] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.021229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:58.030774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:58.030814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:58.030831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:58.030840] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:58.030849] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.041272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 
00:26:57.728 [2024-07-15 18:18:58.050827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:58.050866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:58.050883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:58.050892] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:58.050901] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.061381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:58.070917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:58.070954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:58.070970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:58.070980] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:58.070989] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.081365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:57.728 [2024-07-15 18:18:58.090944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:58.090983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:58.090999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:58.091009] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:58.091022] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.101427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 
00:26:57.728 [2024-07-15 18:18:58.110995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.728 [2024-07-15 18:18:58.111043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.728 [2024-07-15 18:18:58.111060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.728 [2024-07-15 18:18:58.111070] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.728 [2024-07-15 18:18:58.111079] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:57.728 [2024-07-15 18:18:58.121542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.728 qpair failed and we were unable to recover it. 00:26:58.002 [2024-07-15 18:18:58.131121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.131154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.131174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.131184] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.131192] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.141431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 00:26:58.002 [2024-07-15 18:18:58.151263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.151302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.151319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.151328] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.151338] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.161665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 
00:26:58.002 [2024-07-15 18:18:58.171178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.171214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.171231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.171240] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.171249] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.181520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 00:26:58.002 [2024-07-15 18:18:58.191242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.191288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.191304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.191314] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.191322] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.201650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 00:26:58.002 [2024-07-15 18:18:58.211354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.211394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.211410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.211420] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.211428] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.221814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 
00:26:58.002 [2024-07-15 18:18:58.231412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.231453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.231470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.231480] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.231488] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.241948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 00:26:58.002 [2024-07-15 18:18:58.251504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.002 [2024-07-15 18:18:58.251540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.002 [2024-07-15 18:18:58.251556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.002 [2024-07-15 18:18:58.251566] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.002 [2024-07-15 18:18:58.251575] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:26:58.002 [2024-07-15 18:18:58.261899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.002 qpair failed and we were unable to recover it. 
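The block above repeats one host-side failure signature dozens of times: the target rejects the I/O queue CONNECT (ctrlr.c reports Unknown controller ID 0x1), the fabrics CONNECT poll completes with sct 1 / sc 130, and the qpair is torn down with CQ transport error -6 (No such device or address). When triaging a saved copy of this console output, a couple of ordinary grep/sort/uniq one-liners are enough to count the unrecovered qpairs and list the affected rqpair addresses; this is only a convenience sketch, and the file name console.log is a placeholder for wherever the log was saved, not a path used by the test itself.

    # count how many qpairs failed without recovery in this phase of the log
    grep -c 'qpair failed and we were unable to recover it' console.log
    # list the distinct rqpair addresses that failed to connect, with occurrence counts
    grep -o 'Failed to connect rqpair=0x[0-9a-f]*' console.log | sort | uniq -c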
00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Read completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 Write completed with error (sct=0, sc=8) 00:26:58.938 starting I/O failed 00:26:58.938 [2024-07-15 18:18:59.266990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.938 [2024-07-15 18:18:59.274178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.938 [2024-07-15 18:18:59.274223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.938 [2024-07-15 18:18:59.274242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.938 [2024-07-15 18:18:59.274253] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:26:58.938 [2024-07-15 18:18:59.274262] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:26:58.938 [2024-07-15 18:18:59.284894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.938 qpair failed and we were unable to recover it. 00:26:58.938 [2024-07-15 18:18:59.294540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.938 [2024-07-15 18:18:59.294583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.938 [2024-07-15 18:18:59.294599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.938 [2024-07-15 18:18:59.294609] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.938 [2024-07-15 18:18:59.294618] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:26:58.938 [2024-07-15 18:18:59.304784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:58.938 qpair failed and we were unable to recover it. 00:26:58.938 [2024-07-15 18:18:59.314569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.938 [2024-07-15 18:18:59.314609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.938 [2024-07-15 18:18:59.314629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.938 [2024-07-15 18:18:59.314640] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.938 [2024-07-15 18:18:59.314650] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:58.938 [2024-07-15 18:18:59.324924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:58.938 qpair failed and we were unable to recover it. 00:26:58.938 [2024-07-15 18:18:59.334541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.938 [2024-07-15 18:18:59.334579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.938 [2024-07-15 18:18:59.334597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.938 [2024-07-15 18:18:59.334607] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.938 [2024-07-15 18:18:59.334616] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:26:59.196 [2024-07-15 18:18:59.344927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.196 qpair failed and we were unable to recover it. 
00:26:59.196 [2024-07-15 18:18:59.345054] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:59.196 A controller has encountered a failure and is being reset. 00:26:59.196 [2024-07-15 18:18:59.354790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.196 [2024-07-15 18:18:59.354841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.196 [2024-07-15 18:18:59.354868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.196 [2024-07-15 18:18:59.354883] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.196 [2024-07-15 18:18:59.354896] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:59.196 [2024-07-15 18:18:59.365087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 18:18:59.374722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.196 [2024-07-15 18:18:59.374763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.196 [2024-07-15 18:18:59.374781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.196 [2024-07-15 18:18:59.374791] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.196 [2024-07-15 18:18:59.374799] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:26:59.196 [2024-07-15 18:18:59.385022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 18:18:59.385147] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:26:59.196 [2024-07-15 18:18:59.419354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:59.196 Controller properly reset. 00:26:59.196 Initializing NVMe Controllers 00:26:59.197 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.197 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.197 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:59.197 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:59.197 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:59.197 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:59.197 Initialization complete. Launching workers. 
00:26:59.197 Starting thread on core 1 00:26:59.197 Starting thread on core 2 00:26:59.197 Starting thread on core 3 00:26:59.197 Starting thread on core 0 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:59.197 00:26:59.197 real 0m12.571s 00:26:59.197 user 0m26.898s 00:26:59.197 sys 0m3.276s 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:59.197 ************************************ 00:26:59.197 END TEST nvmf_target_disconnect_tc2 00:26:59.197 ************************************ 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:59.197 ************************************ 00:26:59.197 START TEST nvmf_target_disconnect_tc3 00:26:59.197 ************************************ 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc3 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1803544 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:26:59.197 18:18:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:26:59.454 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.354 18:19:01 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1802189 00:27:01.354 18:19:01 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O 
failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Write completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 Read completed with error (sct=0, sc=8) 00:27:02.730 starting I/O failed 00:27:02.730 [2024-07-15 18:19:02.761617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:03.296 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1802189 Killed "${NVMF_APP[@]}" "$@" 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1804184 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1804184 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1804184 ']' 00:27:03.296 18:19:03 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.296 18:19:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:03.296 [2024-07-15 18:19:03.623934] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:27:03.296 [2024-07-15 18:19:03.623985] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.296 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.554 [2024-07-15 18:19:03.725212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 
00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Write completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 Read completed with error (sct=0, sc=8) 00:27:03.554 starting I/O failed 00:27:03.554 [2024-07-15 18:19:03.766917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:03.554 [2024-07-15 18:19:03.794223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.554 [2024-07-15 18:19:03.794261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.554 [2024-07-15 18:19:03.794270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.554 [2024-07-15 18:19:03.794283] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.554 [2024-07-15 18:19:03.794290] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.554 [2024-07-15 18:19:03.794409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:03.554 [2024-07-15 18:19:03.794523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:03.554 [2024-07-15 18:19:03.794631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:03.554 [2024-07-15 18:19:03.794633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.118 Malloc0 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:04.118 18:19:04 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.118 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.376 [2024-07-15 18:19:04.525362] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2031fc0/0x203db40) succeed. 00:27:04.376 [2024-07-15 18:19:04.534937] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2033600/0x207f1d0) succeed. 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.376 [2024-07-15 18:19:04.673803] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.376 18:19:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1803544 00:27:04.376 Write completed with error (sct=0, sc=8) 00:27:04.376 starting I/O failed 00:27:04.376 Read completed with error (sct=0, sc=8) 00:27:04.376 starting I/O failed 00:27:04.376 Read completed with error (sct=0, sc=8) 00:27:04.376 starting I/O failed 
00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Write completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 Read completed with error (sct=0, sc=8) 00:27:04.377 starting I/O failed 00:27:04.377 [2024-07-15 18:19:04.772076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:04.377 [2024-07-15 18:19:04.773799] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:04.377 [2024-07-15 18:19:04.773819] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:04.377 [2024-07-15 18:19:04.773828] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:05.753 [2024-07-15 18:19:05.777866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:05.753 qpair failed and we were unable to recover it. 
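A few lines above, the test rebuilds the target side for tc3 with a series of rpc_cmd calls: a 64 MB Malloc bdev with 512-byte blocks, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 added as a namespace, and data plus discovery listeners on the alternate address 192.168.100.9 port 4420. Outside the test harness, the same target could in principle be stood up with SPDK's scripts/rpc.py as sketched below; the rpc.py path and the assumption of an already running nvmf_tgt are the editor's, while the method names and arguments are copied from the log.

    # minimal sketch of the target-side setup performed by the test (assumes a running nvmf_tgt)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420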
00:27:05.753 [2024-07-15 18:19:05.779322] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:05.754 [2024-07-15 18:19:05.779341] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:05.754 [2024-07-15 18:19:05.779349] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:06.690 [2024-07-15 18:19:06.783231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:06.690 qpair failed and we were unable to recover it. 00:27:06.690 [2024-07-15 18:19:06.784644] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:06.690 [2024-07-15 18:19:06.784661] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:06.690 [2024-07-15 18:19:06.784669] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:07.625 [2024-07-15 18:19:07.788580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:07.625 qpair failed and we were unable to recover it. 00:27:07.625 [2024-07-15 18:19:07.790130] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:07.625 [2024-07-15 18:19:07.790148] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:07.625 [2024-07-15 18:19:07.790157] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:08.559 [2024-07-15 18:19:08.793969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:08.559 qpair failed and we were unable to recover it. 00:27:08.559 [2024-07-15 18:19:08.795488] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:08.559 [2024-07-15 18:19:08.795507] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:08.559 [2024-07-15 18:19:08.795516] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:09.494 [2024-07-15 18:19:09.799390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:09.494 qpair failed and we were unable to recover it. 00:27:09.494 [2024-07-15 18:19:09.800991] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:09.494 [2024-07-15 18:19:09.801008] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:09.494 [2024-07-15 18:19:09.801020] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:10.426 [2024-07-15 18:19:10.804919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:10.426 qpair failed and we were unable to recover it. 
00:27:10.426 [2024-07-15 18:19:10.806366] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:10.426 [2024-07-15 18:19:10.806385] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:10.426 [2024-07-15 18:19:10.806393] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:11.803 [2024-07-15 18:19:11.810211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:11.803 qpair failed and we were unable to recover it. 00:27:11.803 [2024-07-15 18:19:11.812018] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:11.803 [2024-07-15 18:19:11.812043] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:11.803 [2024-07-15 18:19:11.812052] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:12.736 [2024-07-15 18:19:12.815965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:12.736 qpair failed and we were unable to recover it. 00:27:12.736 [2024-07-15 18:19:12.817519] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:12.736 [2024-07-15 18:19:12.817537] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:12.736 [2024-07-15 18:19:12.817545] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:13.686 [2024-07-15 18:19:13.821394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:13.686 qpair failed and we were unable to recover it. 00:27:13.686 [2024-07-15 18:19:13.821498] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:13.686 A controller has encountered a failure and is being reset. 00:27:13.686 Resorting to new failover address 192.168.100.9 00:27:13.686 [2024-07-15 18:19:13.823214] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:13.686 [2024-07-15 18:19:13.823243] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:13.686 [2024-07-15 18:19:13.823255] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:14.704 [2024-07-15 18:19:14.827158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:14.704 qpair failed and we were unable to recover it. 
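At this point the host has given up on 192.168.100.8 and is retrying the connection against the failover address 192.168.100.9, which tc3 supplied through the alt_traddr key of the reconnect example's -r transport ID string. For reference, the invocation logged when tc3 started is reproduced below in a directly runnable form; the relative build path is an assumption about where SPDK was built, everything else is copied from the log.

    # reconnect example as launched by nvmf_target_disconnect_tc3
    # (primary target 192.168.100.8, failover target 192.168.100.9)
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'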
00:27:14.704 [2024-07-15 18:19:14.828661] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:14.704 [2024-07-15 18:19:14.828678] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:14.704 [2024-07-15 18:19:14.828686] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:15.638 [2024-07-15 18:19:15.832576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.638 qpair failed and we were unable to recover it.
00:27:15.638 [2024-07-15 18:19:15.832674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.638 [2024-07-15 18:19:15.832783] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:27:15.638 [2024-07-15 18:19:15.834891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:15.638 Controller properly reset.
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Read completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 Write completed with error (sct=0, sc=8)
00:27:16.574 starting I/O failed
00:27:16.574 [2024-07-15 18:19:16.879760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:16.574 Initializing NVMe Controllers
00:27:16.574 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:16.574 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:16.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:16.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:16.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:16.574 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:16.574 Initialization complete. Launching workers.
00:27:16.574 Starting thread on core 1
00:27:16.574 Starting thread on core 2
00:27:16.574 Starting thread on core 3
00:27:16.574 Starting thread on core 0
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync
00:27:16.574
00:27:16.574 real 0m17.370s
00:27:16.574 user 0m59.644s
00:27:16.574 sys 0m5.626s
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:16.574 ************************************
00:27:16.574 END TEST nvmf_target_disconnect_tc3
00:27:16.574 ************************************
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:16.574 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:27:16.833 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:27:16.833 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:27:16.833 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:27:16.833 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:16.833 18:19:16 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:27:16.833 rmmod nvme_rdma
00:27:16.833 rmmod nvme_fabrics
00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@489 -- # '[' -n 1804184 ']' 00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1804184 00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1804184 ']' 00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1804184 00:27:16.833 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1804184 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1804184' 00:27:16.834 killing process with pid 1804184 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1804184 00:27:16.834 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1804184 00:27:17.092 18:19:17 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.092 18:19:17 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:17.092 00:27:17.092 real 0m39.805s 00:27:17.092 user 2m23.351s 00:27:17.092 sys 0m15.819s 00:27:17.092 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.092 18:19:17 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:17.092 ************************************ 00:27:17.092 END TEST nvmf_target_disconnect 00:27:17.092 ************************************ 00:27:17.092 18:19:17 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:27:17.092 18:19:17 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:17.092 18:19:17 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.092 18:19:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:17.092 18:19:17 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:17.092 00:27:17.092 real 19m20.997s 00:27:17.092 user 44m20.952s 00:27:17.092 sys 6m5.997s 00:27:17.092 18:19:17 nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:17.092 18:19:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:17.092 ************************************ 00:27:17.092 END TEST nvmf_rdma 00:27:17.092 ************************************ 00:27:17.092 18:19:17 -- common/autotest_common.sh@1142 -- # return 0 00:27:17.092 18:19:17 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:17.092 18:19:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:17.092 18:19:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.092 18:19:17 -- common/autotest_common.sh@10 -- # set +x 00:27:17.350 ************************************ 00:27:17.350 START TEST spdkcli_nvmf_rdma 00:27:17.350 ************************************ 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:17.350 * Looking 
for test storage... 00:27:17.350 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.350 18:19:17 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1806601 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1806601 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@829 -- # '[' -z 1806601 ']' 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.351 18:19:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:17.351 [2024-07-15 18:19:17.689136] Starting SPDK v24.09-pre git sha1 2da93d0d7 / DPDK 24.03.0 initialization... 00:27:17.351 [2024-07-15 18:19:17.689191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806601 ] 00:27:17.351 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.609 [2024-07-15 18:19:17.771212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:17.609 [2024-07-15 18:19:17.844172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.609 [2024-07-15 18:19:17.844176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # return 0 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:18.175 18:19:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:28.151 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.151 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.151 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.151 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:28.152 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:28.152 Found 0000:d9:00.1 
(0x15b3 - 0x1015) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:28.152 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:28.152 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:28.152 18:19:26 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:28.152 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:28.152 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:28.152 altname enp217s0f0np0 00:27:28.152 altname ens818f0np0 00:27:28.152 inet 192.168.100.8/24 scope global mlx_0_0 00:27:28.152 valid_lft forever preferred_lft forever 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.152 18:19:26 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:28.152 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:28.152 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:28.152 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:28.152 altname enp217s0f1np1 00:27:28.152 altname ens818f1np1 00:27:28.153 inet 192.168.100.9/24 scope global mlx_0_1 00:27:28.153 valid_lft forever preferred_lft forever 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:28.153 18:19:26 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:28.153 192.168.100.9' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:28.153 192.168.100.9' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:28.153 192.168.100.9' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:28.153 18:19:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:28.153 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:28.153 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:28.153 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:28.153 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:28.153 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:28.153 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:28.153 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:28.153 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:28.153 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:28.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:28.153 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:28.153 ' 00:27:29.090 [2024-07-15 18:19:29.486495] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23d5aa0/0x225c780) succeed. 00:27:29.348 [2024-07-15 18:19:29.496651] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23d6f50/0x2347840) succeed. 
00:27:30.725 [2024-07-15 18:19:30.726505] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:27:32.627 [2024-07-15 18:19:32.889373] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:27:34.529 [2024-07-15 18:19:34.747563] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:27:35.906 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:27:35.906 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:27:35.906 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:27:35.906 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:27:35.906 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:27:35.906 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:27:35.906 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:27:35.906 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:27:35.906 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:27:35.906 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:27:35.906 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:35.906 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:35.906 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:35.906 18:19:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:35.906 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.906 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:36.239 18:19:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:36.239 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.239 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:36.239 18:19:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:27:36.239 18:19:36 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:36.497 18:19:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:36.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:36.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:36.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:36.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:27:36.497 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:27:36.497 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:36.497 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:36.497 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:36.497 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:36.497 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:36.497 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:36.497 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:36.497 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:36.497 ' 00:27:41.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:41.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:41.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:41.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:41.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:27:41.765 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:27:41.765 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:41.765 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:41.765 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:41.765 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:41.765 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:41.765 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:41.765 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:41.765 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1806601 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # '[' -z 1806601 ']' 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # kill -0 1806601 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # uname 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1806601 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1806601' 00:27:41.765 killing process with pid 1806601 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # kill 1806601 00:27:41.765 18:19:41 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # wait 1806601 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:41.765 rmmod nvme_rdma 00:27:41.765 rmmod nvme_fabrics 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:41.765 00:27:41.765 real 0m24.615s 00:27:41.765 user 0m52.305s 00:27:41.765 sys 0m7.473s 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:41.765 18:19:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:41.765 ************************************ 00:27:41.765 END TEST spdkcli_nvmf_rdma 00:27:41.765 ************************************ 00:27:42.025 18:19:42 -- common/autotest_common.sh@1142 -- # return 0 00:27:42.025 18:19:42 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:42.025 18:19:42 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:42.025 18:19:42 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:42.025 18:19:42 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:42.025 18:19:42 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:42.025 18:19:42 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:42.025 18:19:42 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:42.025 18:19:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:42.025 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:27:42.025 18:19:42 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:42.025 18:19:42 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:42.025 18:19:42 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:42.025 18:19:42 -- common/autotest_common.sh@10 -- # set +x 00:27:48.590 INFO: APP EXITING 00:27:48.590 INFO: killing all VMs 00:27:48.590 INFO: killing vhost app 00:27:48.590 INFO: EXIT DONE 00:27:51.877 Waiting for block devices as requested 00:27:51.877 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:51.877 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:51.877 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:51.877 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:52.136 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:52.136 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:52.136 0000:00:04.1 (8086 2021): vfio-pci -> 
ioatdma 00:27:52.394 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:52.394 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:52.394 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:52.674 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:52.674 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:52.674 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:52.933 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:52.933 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:52.933 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:53.191 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:57.382 Cleaning 00:27:57.382 Removing: /var/run/dpdk/spdk0/config 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:57.382 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:57.382 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:57.382 Removing: /var/run/dpdk/spdk1/config 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:57.382 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:57.382 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:57.382 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:57.382 Removing: /var/run/dpdk/spdk2/config 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:57.382 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:57.382 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:57.382 Removing: /var/run/dpdk/spdk3/config 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:57.382 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:57.382 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:57.382 Removing: 
/var/run/dpdk/spdk4/config 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:57.382 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:57.382 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:57.382 Removing: /dev/shm/bdevperf_trace.pid1591622 00:27:57.382 Removing: /dev/shm/bdevperf_trace.pid1711316 00:27:57.382 Removing: /dev/shm/bdev_svc_trace.1 00:27:57.382 Removing: /dev/shm/nvmf_trace.0 00:27:57.382 Removing: /dev/shm/spdk_tgt_trace.pid1465589 00:27:57.382 Removing: /var/run/dpdk/spdk0 00:27:57.382 Removing: /var/run/dpdk/spdk1 00:27:57.382 Removing: /var/run/dpdk/spdk2 00:27:57.382 Removing: /var/run/dpdk/spdk3 00:27:57.382 Removing: /var/run/dpdk/spdk4 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1462852 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1464123 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1465589 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1466080 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1467129 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1467407 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1468335 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1468534 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1468902 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1474520 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1476079 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1476390 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1476720 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1477072 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1477406 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1477587 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1477822 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1478134 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1478986 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1482136 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1482440 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1482741 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1482966 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1483569 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1483589 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1484146 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1484335 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1484589 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1484722 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1485012 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1485041 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1485653 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1485942 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1486258 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1486500 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1486596 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1486667 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1486952 00:27:57.382 Removing: /var/run/dpdk/spdk_pid1487233 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1487521 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1487802 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1488092 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1488371 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1488652 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1488939 
00:27:57.383 Removing: /var/run/dpdk/spdk_pid1489175 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1489398 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1489620 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1489847 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1490122 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1490403 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1490688 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1490974 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1491261 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1491548 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1491827 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1492120 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1492186 00:27:57.383 Removing: /var/run/dpdk/spdk_pid1492533 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1497590 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1546048 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1551045 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1562924 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1568841 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1573206 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1574013 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1581564 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1591622 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1591912 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1596919 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1603535 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1606811 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1618489 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1646829 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1651032 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1709170 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1710123 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1711316 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1716189 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1724835 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1725712 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1726689 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1727537 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1728022 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1733039 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1733117 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1738371 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1739089 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1740137 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1740769 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1740954 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1746524 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1746988 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1752032 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1754889 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1761086 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1772053 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1772078 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1794217 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1794490 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1801075 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1801641 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1803544 00:27:57.642 Removing: /var/run/dpdk/spdk_pid1806601 00:27:57.642 Clean 00:27:57.901 18:19:58 -- common/autotest_common.sh@1451 -- # return 0 00:27:57.901 18:19:58 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:57.901 18:19:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.901 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:27:57.901 18:19:58 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:57.901 18:19:58 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.901 18:19:58 -- common/autotest_common.sh@10 -- # set +x 00:27:57.901 18:19:58 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:27:57.901 18:19:58 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:27:57.901 18:19:58 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:27:57.901 18:19:58 -- spdk/autotest.sh@391 -- # hash lcov 00:27:57.901 18:19:58 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:57.901 18:19:58 -- spdk/autotest.sh@393 -- # hostname 00:27:57.901 18:19:58 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:27:58.160 geninfo: WARNING: invalid characters removed from testname! 00:28:20.172 18:20:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:20.172 18:20:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:21.110 18:20:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:23.014 18:20:23 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:24.390 18:20:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:26.294 18:20:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:28:27.670 18:20:27 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:27.670 18:20:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:27.670 18:20:28 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:27.670 18:20:28 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.670 18:20:28 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.670 18:20:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.670 18:20:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.670 18:20:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.670 18:20:28 -- paths/export.sh@5 -- $ export PATH 00:28:27.670 18:20:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.670 18:20:28 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:28:27.670 18:20:28 -- common/autobuild_common.sh@444 -- $ date +%s 00:28:27.670 18:20:28 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721060428.XXXXXX 00:28:27.670 18:20:28 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721060428.v0oRTb 00:28:27.670 18:20:28 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:28:27.670 18:20:28 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:28:27.670 18:20:28 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:28:27.670 18:20:28 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:27.670 18:20:28 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme 
--exclude /tmp --status-bugs' 00:28:27.670 18:20:28 -- common/autobuild_common.sh@460 -- $ get_config_params 00:28:27.670 18:20:28 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:27.670 18:20:28 -- common/autotest_common.sh@10 -- $ set +x 00:28:27.930 18:20:28 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:28:27.930 18:20:28 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:28:27.930 18:20:28 -- pm/common@17 -- $ local monitor 00:28:27.930 18:20:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.930 18:20:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.930 18:20:28 -- pm/common@21 -- $ date +%s 00:28:27.930 18:20:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.930 18:20:28 -- pm/common@21 -- $ date +%s 00:28:27.930 18:20:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:27.930 18:20:28 -- pm/common@25 -- $ sleep 1 00:28:27.930 18:20:28 -- pm/common@21 -- $ date +%s 00:28:27.930 18:20:28 -- pm/common@21 -- $ date +%s 00:28:27.930 18:20:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721060428 00:28:27.930 18:20:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721060428 00:28:27.930 18:20:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721060428 00:28:27.930 18:20:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721060428 00:28:27.930 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721060428_collect-vmstat.pm.log 00:28:27.930 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721060428_collect-cpu-load.pm.log 00:28:27.930 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721060428_collect-cpu-temp.pm.log 00:28:27.930 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721060428_collect-bmc-pm.bmc.pm.log 00:28:28.867 18:20:29 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:28:28.867 18:20:29 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:28:28.867 18:20:29 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:28.868 18:20:29 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:28.868 18:20:29 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:28.868 18:20:29 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:28.868 18:20:29 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:28.868 18:20:29 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:28.868 18:20:29 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:28:28.868 18:20:29 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:28.868 18:20:29 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:28.868 18:20:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:28.868 18:20:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:28.868 18:20:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:28.868 18:20:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:28.868 18:20:29 -- pm/common@44 -- $ pid=1825912 00:28:28.868 18:20:29 -- pm/common@50 -- $ kill -TERM 1825912 00:28:28.868 18:20:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:28.868 18:20:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:28.868 18:20:29 -- pm/common@44 -- $ pid=1825913 00:28:28.868 18:20:29 -- pm/common@50 -- $ kill -TERM 1825913 00:28:28.868 18:20:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:28.868 18:20:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:28.868 18:20:29 -- pm/common@44 -- $ pid=1825917 00:28:28.868 18:20:29 -- pm/common@50 -- $ kill -TERM 1825917 00:28:28.868 18:20:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:28.868 18:20:29 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:28.868 18:20:29 -- pm/common@44 -- $ pid=1825933 00:28:28.868 18:20:29 -- pm/common@50 -- $ sudo -E kill -TERM 1825933 00:28:28.868 + [[ -n 1345673 ]] 00:28:28.868 + sudo kill 1345673 00:28:28.878 [Pipeline] } 00:28:28.896 [Pipeline] // stage 00:28:28.902 [Pipeline] } 00:28:28.920 [Pipeline] // timeout 00:28:28.925 [Pipeline] } 00:28:28.942 [Pipeline] // catchError 00:28:28.947 [Pipeline] } 00:28:28.965 [Pipeline] // wrap 00:28:28.971 [Pipeline] } 00:28:28.987 [Pipeline] // catchError 00:28:28.995 [Pipeline] stage 00:28:28.997 [Pipeline] { (Epilogue) 00:28:29.010 [Pipeline] catchError 00:28:29.011 [Pipeline] { 00:28:29.023 [Pipeline] echo 00:28:29.024 Cleanup processes 00:28:29.029 [Pipeline] sh 00:28:29.309 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:29.309 1826011 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:28:29.309 1826358 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:29.327 [Pipeline] sh 00:28:29.609 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:28:29.609 ++ grep -v 'sudo pgrep' 00:28:29.609 ++ awk '{print $1}' 00:28:29.609 + sudo kill -9 1826011 00:28:29.621 [Pipeline] sh 00:28:30.001 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:30.001 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:28:34.195 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:28:37.497 [Pipeline] sh 00:28:37.779 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:37.780 Artifacts sizes are good 00:28:37.794 [Pipeline] archiveArtifacts 00:28:37.801 Archiving artifacts 00:28:37.926 [Pipeline] sh 00:28:38.210 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:28:38.228 [Pipeline] cleanWs 00:28:38.239 [WS-CLEANUP] Deleting project workspace... 
00:28:38.239 [WS-CLEANUP] Deferred wipeout is used... 00:28:38.246 [WS-CLEANUP] done 00:28:38.247 [Pipeline] } 00:28:38.261 [Pipeline] // catchError 00:28:38.270 [Pipeline] sh 00:28:38.550 + logger -p user.info -t JENKINS-CI 00:28:38.559 [Pipeline] } 00:28:38.577 [Pipeline] // stage 00:28:38.583 [Pipeline] } 00:28:38.600 [Pipeline] // node 00:28:38.605 [Pipeline] End of Pipeline 00:28:38.637 Finished: SUCCESS
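
Note on the coverage post-processing shown near the end of the run: the autotest teardown captures lcov counters for the spdk tree, merges them with the pre-test baseline, and strips third-party and helper paths before packaging. The following is a minimal standalone sketch of that sequence based only on the commands visible in the log; the workspace path, output directory, filter patterns, and the use of the hostname as the -t tag are taken from this particular run (which used spdk-wfp-21), and it assumes a pre-test baseline cov_base.info already exists, as it does here.

  #!/usr/bin/env bash
  # Sketch of the lcov post-processing performed at the end of autotest.
  # Paths and patterns mirror this run; adjust for a different checkout.
  set -e

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  OUT_DIR=$SPDK_DIR/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # Capture the counters produced by the test run, tagged with the hostname.
  lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT_DIR/cov_test.info"

  # Merge the pre-test baseline with the test capture into one tracefile.
  lcov $LCOV_OPTS -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" \
      -o "$OUT_DIR/cov_total.info"

  # Remove third-party and helper code from the combined report, in place.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -r "$OUT_DIR/cov_total.info" "$pattern" -o "$OUT_DIR/cov_total.info"
  done

  # The intermediate tracefiles are no longer needed once cov_total.info exists.
  rm -f "$OUT_DIR/cov_base.info" "$OUT_DIR/cov_test.info"

The resulting cov_total.info is what genhtml (or further reporting) would consume; keeping the remove step as a loop makes it easy to add or drop filter patterns per job.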
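
Note on the leftover-process sweep in the epilogue: before archiving, the pipeline greps for anything still referencing the workspace, filters out the pgrep invocation itself, and force-kills the survivors (here, the lingering ipmitool collector). A minimal sketch of that idiom, assuming the same workspace layout as this job; the path is illustrative only.

  #!/usr/bin/env bash
  # Sketch of the stray-process cleanup the pipeline epilogue performs.
  WORKSPACE=/var/jenkins/workspace/nvmf-phy-autotest

  # List processes whose command line references the workspace, drop the
  # pgrep line itself, and keep only the PID column.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

  # Guard against an empty PID list; $pids is left unquoted on purpose so
  # multiple PIDs are passed as separate arguments.
  if [ -n "$pids" ]; then
      sudo kill -9 $pids
  fi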